EAT-C: Environment-Adversarial sub-Task Curriculum for Efficient Reinforcement Learning

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Submission
Keywords: reinforcement learning, curriculum learning, sub-tasks, adversarial environment, path planning
Abstract: Reinforcement learning (RL) can become drastically inefficient on long-horizon tasks due to sparse rewards, and the learned policy can be fragile to small changes in the deployed environment. To improve RL's efficiency and generalization to varying environments, we study how to automatically generate a curriculum of tasks with coupled environments for RL. To this end, we train two curriculum policies together with RL: (1) a cooperative planning policy that recursively decomposes a hard task into coarse-to-fine sub-task sequences organized as a tree; and (2) an adversarial environment policy that modifies the environment (e.g., the position/size of obstacles) in each sub-task. The two are complementary in acquiring more informative feedback for RL: the planning policy provides dense rewards for finishing easier sub-tasks, while the environment policy modifies these sub-tasks to be adequately challenging and diverse so the RL agent can quickly adapt to different tasks/environments. Conversely, both curriculum policies are trained using the RL agent's dense feedback on sub-tasks, so the sub-task curriculum remains adaptive to the agent's progress through this "iterative mutual-boosting" scheme. Moreover, the sub-task tree naturally yields an easy-to-hard curriculum for every policy: its top-down construction gradually increases the number of sub-tasks the planning policy must generate, while the adversarial training between the environment policy and the RL policy follows a bottom-up traversal that starts from a dense sequence of easier sub-tasks, allowing more frequent modifications to the environment. Jointly training the three policies therefore leads to efficient RL guided by a curriculum that progressively improves reward density and generalization. We compare our method with popular RL/planning approaches targeting similar problems, as well as methods that use environment generators or adversarial agents. Thorough experiments on diverse benchmark tasks demonstrate significant advantages of our method in improving RL's efficiency and generalization.
One-sentence Summary: We propose an auto-curriculum for RL in which a cooperative path planner, an adversarial environment generator, and the RL agent are jointly trained to improve both learning efficiency and robustness to environment changes.
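
To make the described training scheme concrete, below is a minimal Python sketch of the joint loop implied by the abstract: a planner that builds progressively finer sub-task sequences (top-down curriculum), an adversary that perturbs each sub-task's environment, and an RL agent trained bottom-up on the resulting sub-tasks. All class names, method signatures, and the reward/difficulty heuristics are illustrative assumptions; the paper does not publish an API, and the actual EAT-C implementation will differ.

```python
# Illustrative sketch only -- names and heuristics are assumptions, not the paper's code.
import random


class PlanningPolicy:
    """Cooperative planner: recursively splits a task into sub-task sequences (a tree)."""

    def decompose(self, task, depth):
        # Assumption: each tree level doubles the number of sub-goals along the path.
        return [f"{task}/subgoal_{i}" for i in range(2 ** depth)]


class EnvironmentPolicy:
    """Adversary: perturbs the environment (e.g., obstacle position/size) per sub-task."""

    def modify(self, env, subtask, agent_success_rate):
        # Assumption: perturbation strength grows with the agent's success rate,
        # keeping sub-tasks adequately challenging and diverse.
        strength = min(1.0, 0.1 + agent_success_rate)
        return {**env, "perturbation": strength * random.random(), "subtask": subtask}


class RLAgent:
    """Task policy trained with dense per-sub-task feedback."""

    def rollout(self, env):
        # Placeholder episode: returns (success, dense_reward) for one sub-task.
        return random.random() > env["perturbation"], 1.0 - env["perturbation"]

    def update(self, reward):
        pass  # policy-gradient / value update omitted


def train(task, base_env, max_depth=3, episodes_per_subtask=10):
    planner, adversary, agent = PlanningPolicy(), EnvironmentPolicy(), RLAgent()
    success_rate = 0.0
    # Top-down curriculum: the planner generates progressively finer sub-task trees.
    for depth in range(1, max_depth + 1):
        subtasks = planner.decompose(task, depth)
        # Bottom-up adversarial training: traverse the dense sub-task sequence,
        # letting the adversary modify each sub-task's environment.
        for subtask in subtasks:
            wins = 0
            for _ in range(episodes_per_subtask):
                env = adversary.modify(base_env, subtask, success_rate)
                success, reward = agent.rollout(env)
                agent.update(reward)
                wins += success
            success_rate = wins / episodes_per_subtask
    return agent


if __name__ == "__main__":
    train("navigate_to_goal", {"perturbation": 0.0})
```

In this sketch the adversary and planner are driven by the agent's measured success rate, which stands in for the "dense feedback on sub-tasks" the abstract describes; in the actual method all three policies are learned jointly rather than using the fixed heuristics shown here.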
