Boosting Soft Q-Learning by Bounding

Published: 14 May 2024 (Last Modified: 25 Aug 2024)
Venue: Reinforcement Learning Conference 2024
License: CC BY 4.0
Abstract: An agent’s ability to leverage past experience is critical for efficiently solving new tasks. Prior work has focused on using value function estimates to obtain zero-shot approximations for solutions to a new task. In soft $Q$-learning, we show how any value function estimate can also be used to derive double-sided bounds on the optimal value function. The derived bounds lead to new approaches for boosting training performance, which we validate experimentally. Notably, we find that the proposed framework suggests an alternative method for updating the $Q$-function, leading to boosted performance.
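To make the idea concrete, below is a minimal sketch (not the authors' exact algorithm) of a tabular soft $Q$-learning step in which the Bellman target is clipped to double-sided bounds on the optimal soft $Q$-function. The bound arrays `lower` and `upper`, the toy problem sizes, and all hyperparameters are illustrative assumptions; in the paper's setting such bounds would be derived from a prior value function estimate.

```python
import numpy as np

# Sketch only: tabular soft Q-learning where the Bellman target is clipped
# to hypothetical double-sided bounds [lower, upper] on the optimal soft
# Q-function. The bounds here are placeholders, not the paper's derivation.

def soft_value(q_row, beta):
    """Soft (log-sum-exp) state value: V(s) = (1/beta) * log sum_a exp(beta * Q(s, a))."""
    return (1.0 / beta) * np.log(np.sum(np.exp(beta * q_row)))

def bounded_soft_q_update(Q, s, a, r, s_next, lower, upper,
                          alpha=0.1, gamma=0.99, beta=5.0):
    """One soft Q-learning TD step with the target clipped to [lower, upper]."""
    target = r + gamma * soft_value(Q[s_next], beta)
    # Clip the target to the double-sided bounds before the TD update.
    target = np.clip(target, lower[s, a], upper[s, a])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Example usage on a toy 4-state, 2-action problem with placeholder bounds.
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
lower = np.full((n_states, n_actions), -10.0)
upper = np.full((n_states, n_actions), 10.0)
Q = bounded_soft_q_update(Q, s=0, a=1, r=1.0, s_next=2, lower=lower, upper=upper)
```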