Discovery of Natural Language Concepts in Individual Units of CNNs

Published: 21 Dec 2018, Last Modified: 14 Oct 2024, ICLR 2019 Conference Blind Submission
Abstract: Although deep convolutional networks have improved performance on many natural language tasks, they are often treated as black boxes because they are difficult to interpret. In particular, little is known about how they represent language in their intermediate layers. In an attempt to understand the representations of deep convolutional networks trained on language tasks, we show that individual units are selectively responsive to specific morphemes, words, and phrases, rather than responding to arbitrary and uninterpretable patterns. To quantitatively analyze this intriguing phenomenon, we propose a concept alignment method based on how units respond to replicated text. We conduct analyses with different architectures on multiple datasets for classification and translation tasks and provide new insights into how deep models understand natural language.
Keywords: interpretability of deep neural networks, natural language representation
TL;DR: We show that individual units in CNN representations learned in NLP tasks are selectively responsive to natural language concepts.
Code: [seilna/CNN-Units-in-NLP](https://github.com/seilna/CNN-Units-in-NLP)
Data: [AG News](https://paperswithcode.com/dataset/ag-news)
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/discovery-of-natural-language-concepts-in/code)
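
The core idea behind the concept alignment method can be illustrated with a short sketch: replicate a candidate concept (a morpheme, word, or phrase) to form a synthetic input, feed it through a CNN, and align each unit with the concept whose replicated text excites it most. The sketch below is a minimal illustration of that procedure, not the authors' implementation; `TinyCharCNN`, the ASCII encoding, and the candidate-concept list are all hypothetical stand-ins, and the weights here are untrained, so the alignments it prints are meaningless until a real trained model is substituted.

```python
# Minimal sketch of concept alignment via replicated text (hypothetical model).
import torch
import torch.nn as nn

class TinyCharCNN(nn.Module):
    """Toy character-level CNN; each output channel is treated as one 'unit'."""
    def __init__(self, vocab_size=128, emb=16, units=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.conv = nn.Conv1d(emb, units, kernel_size=5, padding=2)

    def forward(self, ids):                      # ids: (batch, length)
        x = self.emb(ids).transpose(1, 2)        # (batch, emb, length)
        return torch.relu(self.conv(x))          # (batch, units, length)

def encode(text, length=64):
    """ASCII-encode a string and pad/truncate it to a fixed length."""
    ids = [min(ord(c), 127) for c in text[:length]]
    ids += [0] * (length - len(ids))
    return torch.tensor([ids])

def unit_activations(model, concept, reps=8):
    """Mean per-unit activation on text made by replicating one concept."""
    replicated = " ".join([concept] * reps)      # e.g. "good good good ..."
    with torch.no_grad():
        acts = model(encode(replicated))         # (1, units, length)
    return acts.mean(dim=(0, 2))                 # (units,)

model = TinyCharCNN()                            # stand-in; use a trained model
concepts = ["good", "bad", "price", "service"]   # hypothetical candidates
scores = torch.stack([unit_activations(model, c) for c in concepts])
# Align each unit to the concept whose replicated text excites it most.
for u in range(scores.shape[1]):
    best = scores[:, u].argmax().item()
    print(f"unit {u:2d} -> {concepts[best]!r}")
```

With a model trained on a classification or translation task, the same procedure would reveal units that respond selectively to particular concepts, which is the phenomenon the paper quantifies; see the released code at seilna/CNN-Units-in-NLP for the authors' actual method.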
