Keywords: Reinforcement learning, Meta reinforcement learning, Transfer meta reinforcement learning
TL;DR: A novel method for sample-efficient policy adaptation to out-of-distribution tasks
Abstract: Reinforcement learning (RL) has achieved impressive results across domains, yet learning an optimal policy typically requires extensive interaction data, limiting practical deployment. A common remedy is to leverage priors—such as pre-collected datasets or reference policies—but their utility degrades under task mismatch between training and deployment. While prior work has sought to address this mismatch, it has largely been restricted to in-distribution settings. To address this challenge, we propose $\textbf{A}$daptive $\textbf{P}$olicy $\textbf{B}$ackbone (APB), a meta-transfer RL method that inserts lightweight linear layers before and after a shared backbone, thereby enabling parameter-efficient fine-tuning (PEFT) while preserving prior knowledge during adaptation. Our results show that APB improves sample efficiency over standard RL and adapts to out-of-distribution (OOD) tasks where existing meta-RL baselines typically fail.
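To make the architectural idea concrete, here is a minimal sketch (not the authors' code) of parameter-efficient adaptation in the spirit of APB, assuming a PyTorch MLP policy backbone; all class and dimension names are hypothetical. The shared backbone keeps its meta-trained weights frozen, and only the small linear layers inserted before and after it are updated on a new task.

```python
# Hypothetical sketch of an APB-style policy: a frozen shared backbone with
# lightweight, trainable linear layers before and after it (PEFT-style).
import torch
import torch.nn as nn


class AdaptivePolicyBackbone(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 256):
        super().__init__()
        # Shared backbone: holds prior knowledge from meta-training.
        self.backbone = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Lightweight task-specific layers inserted before/after the backbone.
        self.pre = nn.Linear(obs_dim, hidden_dim)
        self.post = nn.Linear(hidden_dim, act_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.post(self.backbone(self.pre(obs)))

    def freeze_backbone(self) -> None:
        # Parameter-efficient fine-tuning: only pre/post layers stay trainable.
        for p in self.backbone.parameters():
            p.requires_grad_(False)


# Example adaptation setup: optimize only the unfrozen (pre/post) parameters.
policy = AdaptivePolicyBackbone(obs_dim=17, act_dim=6)
policy.freeze_backbone()
adapt_params = [p for p in policy.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(adapt_params, lr=3e-4)
```

Freezing the backbone keeps the prior knowledge intact while the two linear layers provide the small number of task-specific parameters updated during adaptation.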
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 19129