Leaky Tiling Activations: A Simple Approach to Learning Sparse Representations Online

28 Sep 2020 (modified: 14 Jan 2021) · ICLR 2021 Poster
  • Keywords: Reinforcement learning, sparse representation, leaky tiling activation functions
  • Abstract: Recent work has shown that sparse representations, in which only a small percentage of units are active, can significantly reduce interference. Those works, however, relied on relatively complex regularization or meta-learning approaches that have only been used offline, in a pre-training phase. We design an activation function that naturally produces sparse representations and so is more amenable to online training. The idea relies on the simple approach of binning, but overcomes the two key limitations of binning: zero gradients almost everywhere, because the function is piecewise flat, and lost precision (reduced discrimination) due to coarse aggregation. We introduce a Leaky Tiling Activation (LTA) that provides non-negligible gradients and produces overlap between bins that improves discrimination. We first show that LTA is robust under covariate shift in a synthetic online supervised problem, where we can vary the level of correlation and drift. Then we move to the deep reinforcement learning setting and investigate both value-based and policy gradient algorithms that use neural networks with LTAs, in classic discrete control and MuJoCo continuous control environments. We show that on most domains, algorithms equipped with LTAs learn a stable policy faster, without needing target networks. (A minimal code sketch of the binning-with-overlap idea appears after this listing.)
  • One-sentence Summary: A simple and efficient way to learn sparse features in the deep learning setting.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Supplementary Material: zip
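
The abstract's core mechanism (bin the input, then "leak" across bin boundaries so that gradients are non-negligible and neighbouring bins overlap) fits in a few lines of code. Below is a minimal PyTorch sketch, not the authors' reference implementation: the function name `lta`, the parameters `lower`, `upper`, `delta` (tile width), and `eta` (leak width), and the linear ramp `d / eta` are assumptions chosen for illustration; the paper and supplementary material give the exact parameterisation.

```python
import torch

def lta(z: torch.Tensor,
        lower: float = -1.0, upper: float = 1.0,
        delta: float = 0.2, eta: float = 0.1) -> torch.Tensor:
    """Hedged sketch of a Leaky Tiling Activation (hypothetical parameterisation).

    Each scalar input is compared against k = (upper - lower) / delta tiles.
    Inside a tile the output is 1; within eta of a tile it ramps linearly to 0,
    which gives usable gradients and overlap between adjacent bins.
    Setting eta = 0 recovers hard (one-hot) binning.
    """
    # Left edges of the tiles: c = [lower, lower + delta, ...].
    c = torch.arange(lower, upper, delta, dtype=z.dtype, device=z.device)
    # Distance of each input from each tile [c, c + delta]; 0 inside the tile.
    z = z.unsqueeze(-1)  # broadcast each scalar against all tiles
    d = torch.clamp(c - z, min=0.0) + torch.clamp(z - delta - c, min=0.0)
    # "Leaky" indicator: ramps linearly over [0, eta], saturates at 1 beyond.
    i_eta = torch.where(d <= eta, d / max(eta, 1e-8), torch.ones_like(d))
    # Activation is 1 - indicator; concatenate the per-unit tile vectors.
    return (1.0 - i_eta).flatten(start_dim=-2)

# Usage: each unit expands into a mostly-zero k-dim tile vector, and
# gradients flow through the leaky regions near tile boundaries.
x = torch.linspace(-0.9, 0.9, 4, requires_grad=True)
phi = lta(x)
phi.sum().backward()
print(phi.shape, x.grad)
```

One design note on this sketch: the ramp is normalised by `eta` so the activation is continuous at distance `eta` from a tile; whether the published LTA normalises its leak in exactly this way is a detail to check against the paper.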