- Keywords: reinforcement, learning, chromatic, networks, partitioning, efficient, neural, architecture, search, weight, sharing, compactification
- TL;DR: We show that combining ENAS with ES optimization for RL is highly scalable, and use it to compactify neural network policies via weight sharing.
- Abstract: We present a neural architecture search algorithm to construct compact reinforcement learning (RL) policies, by combining ENAS and ES in a highly scalable and intuitive way. By defining the combinatorial search space of NAS to be the set of edge-partitionings (colorings) of the network into same-weight classes, we represent compact architectures via efficient learned edge-partitionings. For several RL tasks, we manage to learn colorings translating to effective policies parameterized by as few as 17 weight parameters, providing >90% compression over vanilla policies and 6x compression over state-of-the-art compact policies based on Toeplitz matrices, while still maintaining good reward. We believe that our work is one of the first attempts to propose a rigorous approach to training structured neural network architectures for RL problems, which are of particular interest in mobile robotics, where storage and computational resources are limited.
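To make the weight-sharing idea concrete, here is a minimal sketch (not the paper's implementation) of a dense layer whose edges are partitioned into color classes, with all edges of the same color sharing one scalar weight. The coloring here is random purely for illustration; in the paper it is learned by the NAS controller, and all names (`coloring`, `shared_weights`, `layer`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 4x3 dense layer has 12 edges, but we store only n_colors = 5 weights.
n_in, n_out, n_colors = 4, 3, 5

# coloring[i, j] = color class of edge (i, j). Learned in the paper;
# sampled at random here for illustration only.
coloring = rng.integers(0, n_colors, size=(n_in, n_out))

# The compact parameter vector: one trainable scalar per color class.
shared_weights = rng.standard_normal(n_colors)

def layer(x, shared_weights, coloring):
    """Dense layer whose full weight matrix is expanded from the coloring."""
    W = shared_weights[coloring]   # (n_in, n_out) matrix with <= n_colors distinct values
    return x @ W

x = np.ones(n_in)
y = layer(x, shared_weights, coloring)
```

Storage scales with the number of color classes rather than the number of edges, which is how a policy with thousands of connections can be described by as few as 17 parameters.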