Keywords: Reinforcement Learning, Transfer Learning
Abstract: Reinforcement learning algorithms often suffer from poor sample efficiency, making them challenging to apply in multi-task or continual learning settings.
Sample efficiency can be improved by transferring knowledge from a previously trained teacher policy to guide exploration in new but related tasks.
However, if the new task differs sufficiently from the teacher's training task, the transferred guidance may be sub-optimal and bias exploration toward low-reward behaviors.
We propose an energy-based transfer learning method that uses out-of-distribution detection to selectively issue guidance, enabling the teacher to intervene only in states within its training distribution.
We theoretically show that energy scores reflect the teacher’s state-visitation density and empirically demonstrate improved sample efficiency and performance across both single-task and multi-task settings.
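For concreteness, the following is a minimal sketch of how energy-based out-of-distribution gating of teacher guidance could look, assuming a discrete-action teacher that exposes logits f(s) and the standard energy score E(s) = -T * logsumexp(f(s)/T) from the energy-based OOD detection literature; the function names, threshold calibration, and quantile are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def energy_score(teacher_logits, temperature=1.0):
    # Standard energy score E(s) = -T * logsumexp(f(s) / T).
    # Lower energy suggests the state is in-distribution for the teacher.
    z = np.asarray(teacher_logits, dtype=float) / temperature
    m = z.max()  # numerically stable log-sum-exp
    return -temperature * (m + np.log(np.exp(z - m).sum()))

def calibrate_threshold(logits_on_teacher_training_states, quantile=95):
    # Choose an energy threshold from states the teacher saw during its own training.
    scores = [energy_score(l) for l in logits_on_teacher_training_states]
    return np.percentile(scores, quantile)

def teacher_intervenes(teacher_logits, threshold):
    # Issue guidance only when the current state's energy falls below the threshold,
    # i.e. the state appears to lie within the teacher's training distribution.
    return energy_score(teacher_logits) < threshold

# Hypothetical usage with placeholder logits for 4 discrete actions:
rng = np.random.default_rng(0)
calibration_logits = [rng.normal(size=4) for _ in range(1000)]
threshold = calibrate_threshold(calibration_logits)
if teacher_intervenes(rng.normal(size=4), threshold):
    pass  # follow the teacher's suggested action; otherwise let the student explore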
Submission Number: 21