Hindsight Foresight Relabeling for Meta-Reinforcement Learning

12 Oct 2021 (modified: 08 Sept 2024) · Deep RL Workshop NeurIPS 2021
Keywords: Reinforcement Learning, Meta-Learning
TL;DR: We present HFR, a relabeling method that can be applied to meta-reinforcement learning to boost sample efficiency and performance.
Abstract: Meta-reinforcement learning (meta-RL) algorithms allow agents to learn new behaviors from small amounts of experience, mitigating the sample inefficiency problem in RL. However, while meta-RL agents can adapt quickly to new tasks at test time after experiencing only a few trajectories, the meta-training process itself is still sample-inefficient. Prior work has found that in the multi-task RL setting, relabeling past transitions and thus sharing experience among tasks can improve sample efficiency and asymptotic performance. We apply this idea to the meta-RL setting and devise a new relabeling method called Hindsight Foresight Relabeling (HFR). We construct a relabeling distribution by combining "hindsight", which relabels trajectories using reward functions from the training task distribution, with "foresight", which takes the relabeled trajectories and computes the utility of each trajectory for each task. HFR is easy to implement and readily compatible with existing meta-RL algorithms. We find that HFR improves performance when compared to other relabeling methods on a variety of meta-RL tasks.
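For intuition, the following is a minimal sketch of the relabeling loop the abstract describes, not the paper's exact formulation. All names here (`relabel_trajectory`, `utility_fn`, the softmax over utilities, the task objects' `reward` method) are illustrative assumptions: "hindsight" recomputes a stored trajectory's rewards under each candidate task's reward function, and "foresight" scores how useful each relabeled trajectory would be for learning that task, defining a distribution over tasks to relabel with.

```python
# Hypothetical sketch of a hindsight-foresight relabeling step.
# Assumes each task exposes a reward(s, a, s') function; the paper's
# actual utility function and relabeling distribution may differ.
import numpy as np

def relabel_trajectory(trajectory, tasks, utility_fn, temperature=1.0, rng=None):
    """Sample a task to relabel `trajectory` with, weighted by utility.

    trajectory: list of (state, action, next_state) transitions.
    tasks: candidate tasks, each with a reward(s, a, s') method.
    utility_fn: maps (relabeled_rewards, task) -> scalar ("foresight" score).
    """
    rng = rng or np.random.default_rng()

    # Hindsight: recompute rewards under each candidate task's reward function.
    relabeled = []
    for task in tasks:
        rewards = [task.reward(s, a, s2) for (s, a, s2) in trajectory]
        relabeled.append(rewards)

    # Foresight: score each relabeled trajectory's utility for its task.
    utilities = np.array([utility_fn(r, t) for r, t in zip(relabeled, tasks)])

    # A softmax over utilities gives the relabeling distribution.
    logits = utilities / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    idx = rng.choice(len(tasks), p=probs)
    return tasks[idx], relabeled[idx]
```

A simple baseline choice for `utility_fn` would be the (discounted) return of the relabeled rewards, which already biases sampling toward tasks under which the trajectory happens to succeed in hindsight; the relabeled transitions can then be stored in the replay buffer under the sampled task.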
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/hindsight-foresight-relabeling-for-meta/code)