Automatic Rule Extraction from Long Short Term Memory Networks

Jun 20, 2021 (edited Mar 03, 2017) · ICLR 2017 conference submission
  • TL;DR: We introduce a word importance score for LSTMs, and show that we can use it to replicate an LSTM's performance using a simple, rule-based classifier.
  • Abstract: Although deep learning models have proven effective at solving problems in natural language processing, the mechanism by which they come to their conclusions is often unclear. As a result, these models are generally treated as black boxes, yielding no insight into the underlying learned patterns. In this paper we consider Long Short-Term Memory networks (LSTMs) and demonstrate a new approach for tracking the importance of a given input to the LSTM for a given output. By identifying consistently important patterns of words, we are able to distill state-of-the-art LSTMs on sentiment analysis and question answering into a set of representative phrases. This representation is then quantitatively validated by using the extracted phrases to construct a simple, rule-based classifier which approximates the output of the LSTM.
  • Keywords: Natural language processing, Deep learning, Applications
  • Conflicts: fb.com, berkeley.edu
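The validation step in the abstract (a rule-based classifier built from extracted phrases) can be sketched as follows. This is an illustrative assumption about how such a classifier might look, not the paper's actual extraction or scoring procedure: `rule_based_classifier`, the phrase lists, and the matching-count heuristic are all hypothetical.

```python
# Hypothetical sketch: classify a sentence by counting how many
# "representative phrases" for each class it contains, assigning
# the class with the most matches. Phrase lists here are invented
# examples, not phrases extracted by the paper's method.

def rule_based_classifier(sentence, class_phrases, default="neutral"):
    """Return the class whose representative phrases match most often."""
    text = sentence.lower()
    scores = {label: sum(phrase in text for phrase in phrases)
              for label, phrases in class_phrases.items()}
    best = max(scores, key=scores.get)
    # Fall back to a default label when no phrase matches at all.
    return best if scores[best] > 0 else default

phrases = {
    "positive": ["highly recommend", "a delight", "wonderful"],
    "negative": ["waste of time", "poorly written", "dull"],
}

print(rule_based_classifier("The film was a delight from start to finish.", phrases))
# -> positive
```

In the paper's setting, the phrase lists would come from the word importance scores computed over the trained LSTM, and the classifier's agreement with the LSTM's predictions serves as a quantitative check on the extracted phrases.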
