On the Interplay Between Sparsity and Training in Deep Reinforcement Learning

TMLR Paper 3993 Authors

16 Jan 2025 (modified: 12 Apr 2025) · Rejected by TMLR · CC BY 4.0
Abstract: We study the benefits of different sparse architectures for deep reinforcement learning. In particular, we focus on image-based domains where spatially-biased structures are common, such as those provided by convolutional nets. Using these and several other architectures of equal capacity, we show that sparse structure has a significant effect on learning performance. We also observe that choosing the best sparse architecture for a given domain depends on whether the hidden layer weights are fixed or learned.
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: This revised version of our work addresses the reviewers' comments and feedback. To facilitate review, the revised sections are shown in red. As supplementary material, we attached a zip file containing our anonymized code implementation.
Assigned Action Editor: ~Amir-massoud_Farahmand1
Submission Number: 3993