Keywords: Sparse Training, Dynamic Sparse Training, Deep Reinforcement Learning, Pruning, Reinforcement Learning
TL;DR: This paper explores sparse training in DRL and finds that pruning outperforms RigL; however, removing bias parameters improves RigL's stability and performance at high sparsity, underscoring the need for DRL-specific sparse training strategies.
Abstract: Deep neural networks have enabled remarkable progress in reinforcement learning across a variety of domains, yet model architecture, and sparse training in particular, remains under-explored.
Sparse architectures hold potential for reducing computational overhead in deep reinforcement learning (DRL), where prior studies suggest that parameter under-utilization may create opportunities for efficiency gains.
This work investigates the adaptation of sparse training methods from supervised learning to DRL, specifically examining pruning and the RigL algorithm in value-based agents such as DQN.
In our experiments across multiple Atari games, we study factors that are neglected in supervised sparse training yet relevant to DRL, such as the impact of the bias parameter in high-sparsity regimes and the dynamics of dormant neurons under sparse conditions.
The results reveal that RigL, despite its adaptability in supervised contexts, underperforms pruning in DRL.
Strikingly, removing bias parameters enhances RigL's performance, reduces dormant neurons, and improves stability at high sparsity, whereas pruning suffers the opposite effect.
These observations underscore the need to re-evaluate sparse training methods specifically for DRL.
They also motivate further study of how well sparse training techniques transfer to larger architectures and more diverse environments.
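To make the studied components concrete, below is a minimal PyTorch sketch; it is illustrative, not the paper's implementation. It builds a bias-free DQN-style Q-network, estimates the fraction of dormant neurons from a batch of activations, and performs one simplified RigL-style drop-and-grow mask update. The class and function names, the dormancy threshold `tau`, and the update fraction are assumptions made for the example.

```python
# Illustrative sketch only (not the paper's code); PyTorch assumed.
import torch
import torch.nn as nn


class BiasFreeQNetwork(nn.Module):
    """Small DQN-style Q-network with every bias term removed (bias=False)."""

    def __init__(self, obs_dim: int, num_actions: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden, bias=False),
            nn.ReLU(),
            nn.Linear(hidden, hidden, bias=False),
            nn.ReLU(),
            nn.Linear(hidden, num_actions, bias=False),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def dormant_fraction(activations: torch.Tensor, tau: float = 0.025) -> float:
    """Fraction of units whose mean |activation| over a batch is small relative
    to the layer average -- one common proxy for dormant neurons."""
    mean_act = activations.abs().mean(dim=0)        # per-neuron mean |activation|
    score = mean_act / (mean_act.mean() + 1e-8)     # normalize by layer average
    return (score <= tau).float().mean().item()


@torch.no_grad()
def rigl_drop_and_grow(weight: torch.Tensor, mask: torch.Tensor,
                       grad: torch.Tensor, update_frac: float = 0.1) -> torch.Tensor:
    """Simplified RigL-style update for one layer: drop the smallest-magnitude
    active weights, then grow the same number of inactive connections with the
    largest gradient magnitude. Returns the new binary mask."""
    w, g, m = weight.flatten(), grad.flatten(), mask.clone().flatten()
    k = int(update_frac * m.sum().item())
    if k == 0:
        return mask
    # Drop: among active weights, remove the k with smallest |w|.
    active_mag = torch.where(m.bool(), w.abs(), torch.full_like(w, float("inf")))
    m[torch.topk(active_mag, k, largest=False).indices] = 0.0
    # Grow: among inactive positions, activate the k with largest |grad|.
    inactive_grad = torch.where(m.bool(), torch.zeros_like(g), g.abs())
    m[torch.topk(inactive_grad, k, largest=True).indices] = 1.0
    return m.view_as(mask)


if __name__ == "__main__":
    q_net = BiasFreeQNetwork(obs_dim=8, num_actions=4, hidden=64)
    obs = torch.randn(32, 8)

    # Dormancy estimate on the first hidden layer's activations.
    h = torch.relu(q_net.net[0](obs))
    print("dormant fraction:", dormant_fraction(h))

    # One drop-and-grow step on the first layer, using a toy ~90%-sparse mask
    # and a placeholder loss purely to obtain gradients.
    w = q_net.net[0].weight
    mask = (torch.rand_like(w) > 0.9).float()
    q_net(obs).pow(2).mean().backward()
    new_mask = rigl_drop_and_grow(w, mask, w.grad)
    print("active connections:", int(new_mask.sum().item()))
```

In a full RigL implementation, newly grown weights are typically initialized to zero and the mask is re-applied to the weights after each optimizer step; the sketch omits these details for brevity.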
Submission Number: 8