Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs

Published: 23 Feb 2018, Last Modified: 15 Sept 2024, ICLR 2018 Conference Blind Submission, Readers: Everyone
Abstract: The driving force behind the recent success of LSTMs has been their ability to learn complex and non-linear relationships. Consequently, our inability to describe these relationships has led to LSTMs being characterized as black boxes. To this end, we introduce contextual decomposition (CD), an interpretation algorithm for analyzing individual predictions made by standard LSTMs, without any changes to the underlying model. By decomposing the output of an LSTM, CD captures the contributions of combinations of words or variables to the final prediction. On the task of sentiment analysis with the Yelp and SST data sets, we show that CD is able to reliably identify words and phrases of contrasting sentiment, and how they are combined to yield the LSTM's final prediction. Using the phrase-level labels in SST, we also demonstrate that CD is able to successfully extract positive and negative negations from an LSTM, something which has not previously been done.
TL;DR: We introduce contextual decomposition (CD), an interpretation algorithm for LSTMs capable of extracting word-, phrase-, and interaction-level importance scores.
Keywords: interpretability, LSTM, natural language processing, sentiment analysis, interactions
Code: [jamie-murdoch/ContextualDecomposition](https://github.com/jamie-murdoch/ContextualDecomposition) + [2 community implementations on Papers with Code](https://paperswithcode.com/paper/?openreview=rkRwGg-0Z)
Data: [SST](https://paperswithcode.com/dataset/sst)
Community Implementations: [3 code implementations on CatalyzeX](https://www.catalyzex.com/paper/beyond-word-importance-contextual/code)
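
To give a concrete sense of what the decomposition described in the abstract produces, below is a heavily simplified NumPy sketch of the CD idea for a single-layer LSTM classifier: the cell and hidden states are split into a "relevant" part (sourced from the chosen phrase) and an "irrelevant" remainder, so the final logit separates additively into a phrase score plus everything else. The two-term linearization helper, the weight layout (`W`, `V`, `b` keyed by gate name), and the routing of cross terms here are illustrative assumptions, not the paper's exact rules; the authors' reference implementation is in the linked jamie-murdoch/ContextualDecomposition repository.

```python
# Heavily simplified sketch of contextual decomposition (CD) for a
# single-layer LSTM classifier, in NumPy. The two-way linearization and the
# weight layout (dicts W, V, b keyed by gate name) are illustrative
# assumptions, not the paper's exact rules.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def split_nonlin(act, rel, irrel, bias):
    # Attribute act(rel + irrel + bias) additively to a relevant and an
    # irrelevant part (simplified Shapley-style average over two orderings).
    rel_part = 0.5 * ((act(rel + bias) - act(bias)) +
                      (act(rel + irrel + bias) - act(irrel + bias)))
    return rel_part, act(rel + irrel + bias) - rel_part

def contextual_decomposition(x, phrase, W, V, b, W_out):
    # x: (T, d) word vectors; phrase = (start, stop) indices (inclusive) of
    # the phrase to score; W[g]: (H, d), V[g]: (H, H), b[g]: (H,) for each
    # gate g in {'i', 'f', 'g', 'o'}; W_out: (C, H) classifier weights.
    T, d = x.shape
    H = b['i'].shape[0]
    beta_c, gamma_c = np.zeros(H), np.zeros(H)   # relevant / irrelevant cell
    beta_h, gamma_h = np.zeros(H), np.zeros(H)   # relevant / irrelevant hidden
    start, stop = phrase
    for t in range(T):
        in_phrase = start <= t <= stop
        rel_x = x[t] if in_phrase else np.zeros(d)
        irrel_x = np.zeros(d) if in_phrase else x[t]

        def pre(g):
            # Split each gate pre-activation by the source of its inputs.
            return W[g] @ rel_x + V[g] @ beta_h, W[g] @ irrel_x + V[g] @ gamma_h

        i_r, i_ir = split_nonlin(sigmoid, *pre('i'), b['i'])
        f_r, f_ir = split_nonlin(sigmoid, *pre('f'), b['f'])
        g_r, g_ir = split_nonlin(np.tanh, *pre('g'), b['g'])
        o_r, o_ir = split_nonlin(sigmoid, *pre('o'), b['o'])
        f_t, i_t, o_t = f_r + f_ir, i_r + i_ir, o_r + o_ir  # full gate values

        # Purely relevant products stay in beta; cross terms go to gamma,
        # so beta_c + gamma_c always equals the true cell state c_t.
        beta_c, gamma_c = (f_t * beta_c + i_r * g_r,
                           f_t * gamma_c + i_t * g_ir + i_ir * g_r)
        tanh_r, tanh_ir = split_nonlin(np.tanh, beta_c, gamma_c, np.zeros(H))
        beta_h, gamma_h = o_t * tanh_r, o_t * tanh_ir

    # The final logit splits into a phrase contribution plus everything else.
    return W_out @ beta_h, W_out @ gamma_h
```

By construction the two returned scores sum exactly to the logit `W_out @ h_T`, which is what makes the per-phrase score interpretable as a contribution to the LSTM's prediction.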