Keywords: reinforcement learning, actor critic, policy improvement operators
Abstract: To learn approximately optimal acting policies for decision problems, modern actor-critic algorithms rely on deep neural networks (DNNs) to parameterize the acting policy and on greedification operators to iteratively improve it.
This reliance on DNNs leads to gradient-based policy improvement, which per step is far less greedy than the improvement achievable with greedier operators, such as the greedy update used by Q-learning algorithms.
On the other hand, slow and steady changes to the policy can also benefit the stability of the learning process, resulting in a tradeoff between greedification and stability.
To address this tradeoff, we propose to extend the standard framework of actor-critic algorithms with value-improvement: a second greedification operator applied only when updating the policy's value estimate.
In this framework the agent can evaluate non-parameterized policies and perform much greedier updates while maintaining the steady gradient-based improvement to the parameterized acting policy.
We prove that this approach converges within the popular analysis scheme of generalized policy iteration in the finite-horizon setting.
Empirically, incorporating value-improvement into the popular off-policy actor-critic algorithms TD3 and SAC significantly improves upon or matches the performance of their respective baselines across different environments from the DeepMind continuous control domain, with negligible compute and implementation cost.
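To make the idea of a greedier operator on the value-estimation side concrete, here is a minimal, hedged sketch of how a value-improved critic target could look in a TD3-style update. It is not the authors' implementation: the function and parameter names (`value_improved_critic_target`, `n_candidates`, `noise_std`) and the specific choice of greedification operator (a max over perturbed candidate actions) are illustrative assumptions; the actor is still improved by the usual gradient-based update.

```python
# Hedged sketch: value-improvement in a TD3-style critic update.
# All names below are hypothetical and illustrative, not from the paper.
import torch


def value_improved_critic_target(target_critic, actor, next_obs, reward, done,
                                 gamma=0.99, n_candidates=16, noise_std=0.2):
    """Compute a critic bootstrap target with a greedier operator:
    instead of evaluating only the parameterized policy's action,
    take the max target value over several perturbed candidate actions."""
    with torch.no_grad():
        base_action = actor(next_obs)                           # [B, act_dim]
        # Sample candidate actions around the policy's proposal.
        noise = noise_std * torch.randn(
            n_candidates, *base_action.shape, device=base_action.device)
        candidates = (base_action.unsqueeze(0) + noise).clamp(-1.0, 1.0)
        # Evaluate every candidate with the target critic and keep the best.
        q_vals = torch.stack(
            [target_critic(next_obs, a) for a in candidates])   # [N, B, 1]
        greedy_q = q_vals.max(dim=0).values                     # [B, 1]
        # Standard bootstrapped target, but with the greedier value estimate.
        target = reward + gamma * (1.0 - done) * greedy_q
    return target
```

Under these assumptions, the critic regresses toward a greedier (non-parameterized) policy's value, while the acting policy continues to change slowly via gradient steps, which is the separation the abstract describes.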
Supplementary Material:  zip
Primary Area: Reinforcement learning (e.g., decision and control, planning, hierarchical RL, robotics)
Submission Number: 16792