Task-Agnostic Continual Reinforcement Learning: In Praise of a Simple Baseline

TMLR Paper 424 Authors

11 Sept 2022 (modified: 28 Feb 2023) · Rejected by TMLR
Abstract: We study task-agnostic continual reinforcement learning (TACRL), in which standard RL challenges are compounded with \emph{partial observability} stemming from task agnosticism, as well as the additional difficulties of continual learning (CL), i.e., learning on a non-stationary sequence of tasks. Here we compare TACRL methods against the soft upper bounds prescribed in previous literature: multi-task learning (MTL) methods, which do not have to deal with non-stationary data distributions, and task-aware methods, which are allowed to operate under \emph{full observability}. We consider a previously unexplored and straightforward baseline for TACRL, replay-based recurrent RL (3RL), in which we augment an RL algorithm with recurrent mechanisms to address partial observability and experience replay mechanisms to address catastrophic forgetting in CL. Studying empirical performance on a sequence of RL tasks, we find surprising occurrences of 3RL matching and even surpassing its MTL and task-aware soft upper bounds. We lay out hypotheses that could explain this inflection point in continual and task-agnostic learning research. Our hypotheses are empirically tested on continuous control tasks via a large-scale study of the popular multi-task and continual learning benchmark Meta-World. By analyzing different training statistics, including gradient conflict, we find evidence that 3RL's outperformance stems from its ability to quickly infer how new tasks relate to previous ones, enabling forward transfer.
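To make the 3RL baseline concrete, here is a minimal sketch of its two components as described in the abstract: a recurrent context encoder for partial observability and a task-agnostic replay buffer against forgetting. The GRU architecture, the (observation, action, reward) input, and the buffer design are assumptions for illustration; the paper's exact architecture and base RL algorithm (e.g., SAC) may differ.

```python
import random
from collections import deque

import torch
import torch.nn as nn


class RecurrentEncoder(nn.Module):
    """GRU over (obs, action, reward) transitions.

    Sketch of the recurrent mechanism in 3RL: the hidden state summarizes
    recent experience, letting the agent implicitly infer the current task
    without access to a task label (task agnosticism).
    """

    def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.gru = nn.GRU(obs_dim + act_dim + 1, hidden_dim, batch_first=True)

    def forward(self, obs, act, rew):
        # obs: (B, T, obs_dim), act: (B, T, act_dim), rew: (B, T, 1)
        x = torch.cat([obs, act, rew], dim=-1)
        out, _ = self.gru(x)
        return out[:, -1]  # latent context after the last transition


class ReplayBuffer:
    """FIFO buffer retained across tasks.

    Sketch of the experience-replay mechanism in 3RL: transitions from
    earlier tasks remain in the buffer, so gradient updates keep rehearsing
    them alongside data from the current task, mitigating forgetting.
    """

    def __init__(self, capacity: int = 1_000_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size: int):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

In such a setup, the base algorithm's actor and critic would condition on the current observation concatenated with the encoder's latent context; sequence length and buffer capacity above are illustrative values, not the paper's reported settings.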
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We want to thank the reviewers again for their constructive criticism. We revised the manuscript in light of their recommendations. The main revisions are:
- We updated Hypothesis #4 (now Hypothesis #3) and moved it into the main text (F3W2)
- We removed Figure 5's cut-off and improved its readability (F3W2)
- We moved the gradient-conflict metric argumentation into the main text (ox5E)
- We clarified the parameter stability analysis and its relation to forgetting (uYqb)
- We clarified how the entropy-like metric used to compute parameter stability should be interpreted (ox5E)
- We added a justification for our hyperparameters as well as a hyperparameter table (see App. D.1) (ox5E)

All revisions are highlighted in blue.
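For context on the gradient-conflict metric referenced above and in the abstract: gradient conflict between tasks is commonly measured as the cosine similarity between per-task gradients of the shared parameters, with negative values indicating conflict. A minimal sketch of that common formulation follows; the paper may define the metric differently.

```python
import torch


def gradient_cosine(grads_a, grads_b, eps: float = 1e-8) -> torch.Tensor:
    """Cosine similarity between two sets of per-task gradients.

    grads_a / grads_b: iterables of gradient tensors (one per shared
    parameter), e.g. obtained via torch.autograd.grad on two different
    tasks' losses. A negative result means the tasks pull the shared
    parameters in conflicting directions.
    """
    ga = torch.cat([g.reshape(-1) for g in grads_a])
    gb = torch.cat([g.reshape(-1) for g in grads_b])
    return torch.dot(ga, gb) / (ga.norm() * gb.norm() + eps)
```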
Assigned Action Editor: ~Edward_Grefenstette1
Submission Number: 424