Keywords: Reinforcement Learning, Representation learning for planning, Meta-RL, Attention Mechanism, Contrastive Learning, Offline RL
Abstract: Offline meta-reinforcement learning (OMRL) is an understudied problem with tremendous potential impact, as it would enable RL algorithms in many real-world applications. A popular solution to the problem is to infer the task identity as an augmented state using a context-based encoder, for which efficient learning of robust task representations remains an open challenge. In this work, we provably improve upon one of the SOTA OMRL algorithms, FOCAL, by incorporating an intra-task attention mechanism and inter-task contrastive learning objectives to robustify task representation learning against sparse rewards and distribution shift. Theoretical analysis and experiments are presented to demonstrate the superior performance and robustness of our end-to-end, model-free framework compared to prior algorithms across multiple meta-RL benchmarks.
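As an illustration only (not the authors' released code), the inter-task contrastive objective mentioned above can be sketched as an InfoNCE-style loss over context-encoder embeddings, where embeddings drawn from the same task are treated as positive pairs and embeddings from different tasks as negatives. The function below is a hypothetical NumPy sketch under that assumption:

```python
import numpy as np

def inter_task_contrastive_loss(embeddings, task_ids, temperature=0.1):
    """Hypothetical InfoNCE-style sketch: pull together context embeddings
    from the same task, push apart embeddings from different tasks."""
    # L2-normalize embeddings so similarity is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                 # pairwise similarity logits
    np.fill_diagonal(sim, -np.inf)              # exclude self-comparisons
    same_task = task_ids[:, None] == task_ids[None, :]
    np.fill_diagonal(same_task, False)
    # log-softmax over each row's similarities to all other embeddings
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # average negative log-probability of the positive pairs per anchor
    pos_log_prob = np.where(same_task, log_prob, 0.0)
    losses = -pos_log_prob.sum(axis=1) / same_task.sum(axis=1)
    return losses.mean()

# toy usage: four context embeddings drawn from two tasks
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
loss = inter_task_contrastive_loss(emb, np.array([0, 0, 1, 1]))
```

In the actual framework the embeddings would come from the learned context encoder and the loss would be minimized jointly with the policy objective; the encoder, batch construction, and temperature here are assumptions for illustration.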
One-sentence Summary: A new offline meta-RL SOTA with a provably robustified task inference module via an intra-task attention mechanism and inter-task contrastive learning.
Supplementary Material: zip