Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks

Published: 12 Jan 2021, Last Modified: 22 Oct 2023 · ICLR 2021 Poster
Readers: Everyone
Keywords: modularity, systematic generalization, compositionality
Abstract: Neural networks (NNs) whose subnetworks implement reusable functions are expected to offer numerous advantages, including compositionality through efficient recombination of functional building blocks, interpretability, and prevention of catastrophic interference. Understanding whether and how NNs are modular could provide insights into how to improve them. Current inspection methods, however, fail to link modules to their functionality. In this paper, we present a novel method based on learning binary weight masks to identify individual weights and subnets responsible for specific functions. Using this powerful tool, we contribute an extensive study of emerging modularity in NNs that covers several standard architectures and datasets. We demonstrate how common NNs fail to reuse submodules and offer new insights into the related issue of systematic generalization on language tasks.
One-sentence Summary: We develop a method for analyzing emerging functional modularity in neural networks based on differentiable weight masks and use it to point out important issues in current-day neural networks.
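
To make the idea in the abstract concrete, here is a minimal, illustrative PyTorch sketch of differentiable weight masking. It is not the authors' implementation (see the linked repository for that); it only shows the general approach of training per-weight mask logits over a frozen layer with a straight-through estimator and a sparsity penalty. The names `MaskedLinear` and `mask_regularizer`, and the hyperparameter `alpha`, are illustrative assumptions.

```python
# Illustrative sketch of differentiable binary weight masking.
# The pretrained weights stay frozen; only the per-weight mask
# logits are trained for a given target function.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    def __init__(self, weight: torch.Tensor, bias: torch.Tensor):
        super().__init__()
        # Frozen, pretrained parameters of the layer being inspected.
        self.weight = nn.Parameter(weight, requires_grad=False)
        self.bias = nn.Parameter(bias, requires_grad=False)
        # One trainable logit per weight; sigmoid(logit) is the keep-probability.
        self.mask_logits = nn.Parameter(torch.ones_like(weight))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(self.mask_logits)
        hard = (probs > 0.5).float()
        # Straight-through estimator: a hard 0/1 mask in the forward pass,
        # while gradients flow through the soft probabilities.
        mask = hard + probs - probs.detach()
        return F.linear(x, self.weight * mask, self.bias)


def mask_regularizer(model: nn.Module, alpha: float = 1e-3) -> torch.Tensor:
    """Penalize kept weights so the mask isolates a minimal subnetwork."""
    total = sum(torch.sigmoid(m.mask_logits).sum()
                for m in model.modules() if isinstance(m, MaskedLinear))
    return alpha * total
```

In use, one would train only the mask logits on data for a single target function, minimizing the task loss plus `mask_regularizer`; the weights whose masks remain on delineate the subnetwork attributed to that function, and comparing the masks obtained for different functions indicates how much the network shares or reuses weights.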
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [RobertCsordas/modules](https://github.com/RobertCsordas/modules)
Data: [Mathematics Dataset](https://paperswithcode.com/dataset/mathematics), [SCAN](https://paperswithcode.com/dataset/scan)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2010.02066/code)