Abstract: Deep reinforcement learning (DRL) has been widely
adopted in various applications, yet it faces practical limitations
due to high storage and computational demands. Dynamic sparse
training (DST) has recently emerged as a prominent approach to
reduce these demands during training and inference phases, but
existing DST methods achieve high sparsity levels at the cost of
policy performance, because they prune connections based solely on
their absolute weight magnitude and grow new connections at random.
To address this, we present a generic method that can
be seamlessly integrated into existing DST methods in DRL to
enhance their policy performance while preserving their sparsity
levels. Specifically, we develop a novel method for calculating
the importance of connections within the model. Subsequently,
we dynamically adjust the sparse network topology by dropping
existing connections and introducing new connections based on
their respective importance values. Through validation on eight
widely used simulation tasks, our method improves two state-of-the-art
(SOTA) DST approaches by up to 70% in episode return
and average return across all episodes under various sparsity
levels.
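The abstract describes a drop-and-grow update in which existing connections are removed and new ones are introduced according to per-connection importance scores. The sketch below illustrates one such update for a single sparse layer; it is not the authors' implementation, and the `importance` tensor is a hypothetical placeholder, since the paper's actual importance measure is not specified here.

```python
import torch

def adjust_topology(weight, mask, importance, update_fraction=0.1):
    """One drop-and-grow step for a sparse layer (illustrative sketch).

    Drops the least-important active connections and grows the most-important
    inactive ones, keeping the layer's overall sparsity level unchanged.
    `importance` is assumed to be a dense tensor of per-connection scores.
    """
    n_active = int(mask.sum().item())
    n_update = int(update_fraction * n_active)

    # Drop: among active connections, remove those with the lowest importance.
    active_scores = torch.where(
        mask.bool(), importance, torch.full_like(importance, float("inf"))
    )
    drop_idx = torch.topk(active_scores.flatten(), n_update, largest=False).indices

    # Grow: among inactive connections, activate those with the highest importance.
    inactive_scores = torch.where(
        mask.bool(), torch.full_like(importance, float("-inf")), importance
    )
    grow_idx = torch.topk(inactive_scores.flatten(), n_update, largest=True).indices

    new_mask = mask.flatten().clone()
    new_mask[drop_idx] = 0.0
    new_mask[grow_idx] = 1.0
    new_mask = new_mask.view_as(mask)

    # Dropped weights are zeroed; newly grown connections start from zero.
    new_weight = weight * new_mask
    return new_weight, new_mask
```

Under these assumptions, the number of active connections before and after the update is identical, so the sparsity level is preserved while the topology shifts toward higher-importance connections.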