Unit-level surprise in neural networks

Sep 22, 2021 | ICBINB@NeurIPS2021 Spotlight | Readers: Everyone
  • TL;DR: We explore the utility of unit-level surprise in neural networks for rapid adaptation to new data and learning modular networks.
  • Abstract: To adapt to changes in real-world data distributions, neural networks must update their parameters. We argue that unit-level surprise should be useful for: (i) determining which few parameters should be updated to adapt quickly; and (ii) learning a modularization such that only a few modules need to be adapted for transfer. We empirically validate (i) in simple settings and reflect on the challenges and opportunities of realizing both (i) and (ii) in more general settings.
  • Keywords: surprise, deep learning, domain adaptation, OOD detection, meta-learning, biologically-inspired
  • Category: Negative result: I would like to share my insights and negative results on this topic with the community; Stuck paper: I hope to get ideas in this workshop that help me get unstuck and improve this paper
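To make the abstract's core idea concrete, here is a minimal sketch of one common way to operationalize unit-level surprise: track running per-unit activation statistics on in-distribution data, then score new inputs by each unit's absolute z-score. This is an illustrative assumption, not the paper's exact method; the class name `UnitSurprise` and all details are hypothetical.

```python
import numpy as np

class UnitSurprise:
    """Tracks running per-unit activation statistics and scores how
    surprising new activations are, as mean absolute z-scores.
    Hypothetical sketch, not the paper's exact formulation."""

    def __init__(self, num_units, eps=1e-8):
        self.mean = np.zeros(num_units)
        self.var = np.ones(num_units)
        self.count = 0
        self.eps = eps

    def update(self, activations):
        # activations: (batch, num_units); batched Welford-style update
        # of running mean and variance per unit.
        batch_mean = activations.mean(axis=0)
        batch_var = activations.var(axis=0)
        n = activations.shape[0]
        total = self.count + n
        delta = batch_mean - self.mean
        self.mean += delta * n / total
        self.var = (self.count * self.var + n * batch_var
                    + self.count * n / total * delta ** 2) / total
        self.count = total

    def surprise(self, activations):
        # Per-unit surprise: mean absolute z-score over the batch.
        z = np.abs(activations - self.mean) / np.sqrt(self.var + self.eps)
        return z.mean(axis=0)

# Usage: fit on in-distribution activations, then score a shifted batch.
rng = np.random.default_rng(0)
tracker = UnitSurprise(num_units=4)
tracker.update(rng.normal(0.0, 1.0, size=(1000, 4)))

shifted = rng.normal(0.0, 1.0, size=(100, 4))
shifted[:, 2] += 5.0  # only unit 2 sees a distribution shift
s = tracker.surprise(shifted)
print(int(np.argmax(s)))  # → 2: the shifted unit is the most surprised
```

Under point (i) of the abstract, a high per-unit surprise score like `s[2]` here would flag that unit's parameters as the ones to adapt, leaving the rest of the network untouched.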