TL;DR: We derive a finite-time bound for an asymmetric actor-critic algorithm and compare it with its symmetric counterpart, offering a justification for the effectiveness of asymmetric learning.
Abstract: In reinforcement learning for partially observable environments, many successful algorithms have been developed within the asymmetric learning paradigm. This paradigm leverages additional state information available at training time for faster learning. Although the proposed learning objectives are usually theoretically sound, these methods still lack a precise theoretical justification for their potential benefits. We propose such a justification for asymmetric actor-critic algorithms with linear function approximators by adapting a finite-time convergence analysis to this setting. The resulting finite-time bound reveals that the asymmetric critic eliminates error terms arising from aliasing in the agent state.
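Since the full paper is not included on this page, the following is only a minimal, illustrative sketch of the setting the abstract describes: an actor-critic with linear function approximators in which the critic is conditioned on the true state (privileged information available at training time) while the actor is conditioned on an aliased observation. The toy environment, feature maps, and step sizes below are assumptions made for illustration, not the paper's construction or analysis.

```python
# Minimal sketch (assumptions, not the paper's algorithm): an asymmetric
# actor-critic with linear function approximation on a toy POMDP where
# several true states map to the same observation (aliasing).
# Critic V(s) sees the true state s; actor pi(a | o) sees only the observation o.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_obs, n_actions = 4, 2, 2
obs_of_state = np.array([0, 0, 1, 1])        # two states share each observation (aliasing)
gamma, alpha_v, alpha_pi = 0.95, 0.1, 0.01   # hypothetical discount and step sizes

# Hypothetical dynamics and rewards, just to have something runnable.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] -> next-state dist
R = rng.normal(size=(n_states, n_actions))                        # R[s, a]

def state_features(s):   # one-hot features of the true state (critic input)
    x = np.zeros(n_states); x[s] = 1.0; return x

def obs_features(o):     # one-hot features of the observation (actor input)
    x = np.zeros(n_obs); x[o] = 1.0; return x

w = np.zeros(n_states)                 # linear critic: V(s) = w @ state_features(s)
theta = np.zeros((n_actions, n_obs))   # linear actor logits: pi(.|o) = softmax(theta @ obs_features(o))

def policy(o):
    logits = theta @ obs_features(o)
    p = np.exp(logits - logits.max()); return p / p.sum()

s = rng.integers(n_states)
for t in range(5000):
    o = obs_of_state[s]
    p = policy(o)
    a = rng.choice(n_actions, p=p)
    s_next = rng.choice(n_states, p=P[s, a])
    r = R[s, a]

    # Asymmetric TD error: bootstrapped from the privileged, state-based critic.
    delta = r + gamma * w @ state_features(s_next) - w @ state_features(s)

    # Critic update: semi-gradient TD(0) on the true state.
    w += alpha_v * delta * state_features(s)

    # Actor update: policy gradient on the agent's observation, using the
    # state-based TD error as the advantage estimate.
    grad_log = -np.outer(p, obs_features(o))   # -pi(a'|o) * phi(o) for every action a'
    grad_log[a] += obs_features(o)             # plus phi(o) for the taken action
    theta += alpha_pi * delta * grad_log

    s = s_next
```

The asymmetry sits in the TD error: it bootstraps from a state-based value function, so the critic's targets are not corrupted by observation aliasing, which is the error source the abstract says the asymmetric critic eliminates; a symmetric variant would instead fit a value function of the (aliased) agent state.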
Lay Summary: Some intelligent agents learn faster by using extra information during training — like full knowledge of the environment’s state — even if that information is not available later. This is called asymmetric learning, and it works well in practice. But why does it work so well? In this paper, we offer a theoretical answer for a learning algorithm called the asymmetric actor-critic algorithm. We show that giving this extra information to part of the learning algorithm — the critic — reduces specific errors caused by limited observations. This makes learning more efficient, and our analysis explains when and why this advantage appears.
Primary Area: Theory->Reinforcement Learning and Planning
Keywords: Partially Observable Environment, Asymmetric Learning, Privileged Information, Privileged Critic, Convergence Analysis, Asymmetric Actor-Critic, Finite-Time Bound, Agent-State Policy, Aliasing
Submission Number: 15582