TL;DR: We train neural networks to be more modular and interpretable by using an enmeshment loss that promotes clusterability.
Abstract: One approach to improving network interpretability is clusterability, i.e., splitting a model into disjoint clusters that can be studied independently. We find that pretrained models are highly unclusterable and therefore train models to be more modular using an "enmeshment loss" that encourages the formation of non-interacting clusters. Using automated interpretability measures, we show that our method finds clusters that learn different, disjoint, and smaller circuits for CIFAR-10 labels. Our approach provides a promising direction for making neural networks easier to interpret and thereby control.
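To make the idea concrete, here is a minimal sketch of what a loss that penalizes cross-cluster interactions could look like. The abstract does not specify the paper's formulation, so the function name `enmeshment_loss`, the fixed per-neuron cluster assignments, and the L1 penalty on cross-cluster weights are all assumptions for illustration, not the authors' actual method.

```python
import torch
import torch.nn as nn

def enmeshment_loss(layer: nn.Linear, cluster_ids_in: torch.Tensor,
                    cluster_ids_out: torch.Tensor) -> torch.Tensor:
    """Penalize weight magnitude on edges that cross cluster boundaries.

    cluster_ids_in / cluster_ids_out assign each input / output neuron
    of the layer to a cluster (hypothetical interface; the paper's
    exact formulation may differ).
    """
    # Boolean mask over the (out_features, in_features) weight matrix:
    # True where an edge connects neurons in different clusters.
    cross = cluster_ids_out.unsqueeze(1) != cluster_ids_in.unsqueeze(0)
    # L1 penalty on cross-cluster weights only, so within-cluster
    # connectivity is left unconstrained.
    return (layer.weight.abs() * cross).sum()

# Usage: add the penalty to the task loss during training.
layer = nn.Linear(8, 4)
ids_in = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
ids_out = torch.tensor([0, 0, 1, 1])
task_loss = torch.tensor(0.0)  # placeholder for e.g. cross-entropy
total_loss = task_loss + 0.1 * enmeshment_loss(layer, ids_in, ids_out)
total_loss.backward()
```

Driving cross-cluster weights toward zero in this way would yield near-disjoint clusters that can be analyzed independently, which is the property the abstract describes.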
Style Files: I have used the style files.
Submission Number: 21