Understanding intermediate layers using linear classifier probes

Guillaume Alain, Yoshua Bengio

Nov 05, 2016 (modified: Nov 14, 2016) · ICLR 2017 conference submission
  • Abstract: Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences for the design of such models, and it enables the expert to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as "probes", where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems (a minimal code sketch of the probing setup follows this list).
  • TL;DR: A new, useful way to probe the information carried by intermediate layers in deep networks.
  • Conflicts: umontreal.ca
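
The sketch below illustrates the probing idea described in the abstract: a linear classifier is fit on the frozen activations of each layer, so the probe measures how linearly separable the classes are at that depth without ever modifying the model itself. The toy MLP with random frozen weights, the synthetic data, and the use of scikit-learn's LogisticRegression are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of linear classifier probes on a frozen network.
# Assumptions (not from the paper): a toy two-hidden-layer ReLU MLP with
# random fixed weights, synthetic binary-classification data, and
# scikit-learn's LogisticRegression as the linear probe.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary classification data.
X = rng.normal(size=(1000, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

# Frozen weights; in practice these would come from a trained model
# whose parameters the probes never affect.
W1 = rng.normal(size=(20, 64)) * 0.1
W2 = rng.normal(size=(64, 64)) * 0.1

h1 = np.maximum(X @ W1, 0.0)   # hidden layer 1 activations
h2 = np.maximum(h1 @ W2, 0.0)  # hidden layer 2 activations

# One probe per layer: a linear classifier that sees only that layer's
# hidden units as discriminating features.
for name, features in [("input", X), ("layer1", h1), ("layer2", h2)]:
    probe = LogisticRegression(max_iter=1000).fit(features, y)
    print(f"probe accuracy at {name}: {probe.score(features, y):.3f}")
```

Because each probe is trained independently of the model, the same procedure can be repeated at multiple checkpoints during training to visualize how the linear separability of each layer evolves.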
