TL;DR: We explore the utility of unit-level surprise in neural networks for rapid adaptation to new data and learning modular networks.
Abstract: To adapt to changes in real-world data distributions, neural networks must update their parameters. We argue that unit-level surprise should be useful for: (i) determining which few parameters should update, enabling rapid adaptation; and (ii) learning a modularization such that only a few modules need to be adapted for transfer. We empirically validate (i) in simple settings and reflect on the challenges and opportunities of realizing both (i) and (ii) in more general settings.
Keywords: surprise, deep learning, domain adaptation, OOD detection, meta-learning, biologically-inspired
Category: Stuck paper: I hope to get ideas in this workshop that help me get unstuck and improve this paper