Eau De $Q$-Network: Adaptive Distillation of Neural Networks in Deep Reinforcement Learning

Published: 09 May 2025, Last Modified: 28 May 2025 · RLC 2025 · CC BY 4.0
Keywords: Deep Reinforcement Learning, Sparse Training, Distillation
TL;DR: We introduce Eau De $Q$-Network, a dense-to-sparse reinforcement learning framework that adapts the sparsity schedule to the agent's learning pace while maintaining high performance.
Abstract: Recent works have successfully demonstrated that sparse deep reinforcement learning agents can be competitive with their dense counterparts. This opens up opportunities for reinforcement learning applications in fields where inference time and memory requirements are cost-sensitive or limited by hardware. Until now, dense-to-sparse methods have relied on hand-designed sparsity schedules that are not synchronized with the agent's learning pace. Crucially, the final sparsity level is chosen as a hyperparameter, which requires careful tuning, as setting it too high might lead to poor performance. In this work, we address these shortcomings by crafting a dense-to-sparse algorithm that we name *Eau De $Q$-Network* (EauDeQN). To increase sparsity at the agent's learning pace, we consider multiple online networks with different sparsity levels, where each online network is trained from a shared target network. At each target update, the online network with the smallest loss is chosen as the next target network, while the other networks are replaced by pruned versions of the chosen network. We evaluate the proposed approach on the Atari $2600$ benchmark and the MuJoCo physics simulator, showing that EauDeQN reaches high sparsity levels while keeping performance high.
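As a rough illustration of the adaptive target-update step described in the abstract, the sketch below shows one possible implementation in Python/PyTorch. The framework choice, the use of global magnitude pruning, and the helper names `magnitude_prune` and `adaptive_target_update` are assumptions for illustration only; the paper's actual pruning criterion and sparsity-increment schedule may differ.

```python
import copy
import torch
import torch.nn as nn

def magnitude_prune(net: nn.Module, extra_fraction: float) -> nn.Module:
    """Return a copy of `net` with an additional `extra_fraction` of its
    currently non-zero weights zeroed out (global magnitude pruning).
    This pruning rule is an assumption, not necessarily the paper's."""
    pruned = copy.deepcopy(net)
    with torch.no_grad():
        all_weights = torch.cat([p.abs().flatten() for p in pruned.parameters()])
        nonzero = all_weights[all_weights > 0]
        k = int(extra_fraction * nonzero.numel())
        if k == 0:
            return pruned
        threshold = torch.kthvalue(nonzero, k).values
        for p in pruned.parameters():
            # Zero out weights whose magnitude falls below the global threshold.
            p.mul_((p.abs() > threshold).float())
    return pruned

def adaptive_target_update(online_nets, losses, prune_fractions):
    """At a target update: keep the online network with the smallest loss as
    the new shared target, then replace the candidate online networks with
    progressively more pruned copies of that winner (hypothetical helper)."""
    best = min(range(len(online_nets)), key=lambda i: losses[i])
    new_target = copy.deepcopy(online_nets[best])
    new_online = [magnitude_prune(new_target, f) for f in prune_fractions]
    return new_target, new_online
```

Under this reading, sparsity only increases when a sparser candidate achieves the smallest loss, so the schedule follows the agent's learning pace rather than a fixed hand-designed curve.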
Submission Number: 121