S3TA: A Soft, Spatial, Sequential, Top-Down Attention Model

27 Sept 2018, 22:36 (modified: 21 Dec 2018, 01:23) · ICLR 2019 Conference Blind Submission
Keywords: Attention, RL, Top-Down, Interpretability
TL;DR: http://sites.google.com/view/s3ta
Abstract: We present a soft, spatial, sequential, top-down attention model (S3TA). This model uses a soft attention mechanism to bottleneck its view of the input. A recurrent core generates query vectors, which actively select information from the input by correlating each query with input- and space-dependent key maps at different spatial locations. We demonstrate the power and interpretability of this model in two settings. First, we build an agent that uses this attention model in RL environments and show that it achieves performance competitive with state-of-the-art models while producing attention maps that elucidate some of the strategies used to solve the task. Second, we apply the model to supervised learning tasks and show that it also achieves competitive performance and provides interpretable attention maps that reveal some of the underlying logic in the model's decision making.
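The abstract describes a query-key soft attention read over spatial locations. Below is a minimal NumPy sketch of that idea: a query vector (as would be emitted by a recurrent core) is correlated with a space-dependent key map, the scores are softmaxed into a spatial attention map, and a value map is read out as the attention-weighted sum. The shapes, the dot-product scoring, and the `1/sqrt(d_k)` scaling are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def soft_spatial_read(query, keys, values):
    """One soft spatial attention read (illustrative sketch).

    query:  (d_k,)       query vector, e.g. from a recurrent core
    keys:   (H, W, d_k)  input- and space-dependent key map
    values: (H, W, d_v)  value map read from the input
    Returns the (d_v,) read vector and the (H, W) attention map.
    """
    H, W, d_k = keys.shape
    # Correlate the query with the key at every spatial location.
    scores = keys.reshape(H * W, d_k) @ query / np.sqrt(d_k)
    # Softmax over all locations -> a soft spatial attention map.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Attention-weighted sum of the values is the bottlenecked read.
    read = weights @ values.reshape(H * W, -1)
    return read, weights.reshape(H, W)

rng = np.random.default_rng(0)
keys = rng.normal(size=(8, 8, 16))
values = rng.normal(size=(8, 8, 32))
query = rng.normal(size=16)
read, attn = soft_spatial_read(query, keys, values)
```

In a sequential model, a new query would be produced at every time step, so the attention map (and hence what the model looks at) changes as the recurrent state evolves; visualizing `attn` per step is what yields the interpretable maps the abstract refers to.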