NAM: Normalization-based Attention Module

Published: 24 Nov 2021, Last Modified: 22 Oct 2023 · ImageNet PPF 2021
Abstract: Recognizing less salient features is the key to model compression. However, this has not been investigated in existing attention mechanisms. In this work, we propose a novel normalization-based attention module (NAM), which suppresses less salient weights. It applies a weight sparsity penalty to the attention modules, making them more computationally efficient while retaining similar performance. A comparison with three other attention mechanisms on both ResNet and MobileNet shows that our method achieves higher accuracy. Code for this paper is publicly available at \url{https://github.com/Christian-lyc/NAM}.
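As a rough illustration of the idea, the channel-attention variant can be sketched as follows: the batch-normalization scaling factors (gamma) serve as a measure of per-channel salience, and their normalized magnitudes reweight the feature map so that less salient channels are suppressed. This is a minimal PyTorch sketch, not the authors' implementation; the class name `ChannelNAM` and the exact gating (sigmoid on the reweighted features, applied to a residual copy of the input) are assumptions of this sketch, so refer to the repository linked above for the official code.

```python
import torch
import torch.nn as nn

class ChannelNAM(nn.Module):
    """Sketch of a normalization-based channel attention block.

    Batch-norm scaling factors (gamma) indicate how much each channel
    varies; channels with small |gamma| are treated as less salient
    and are down-weighted. Names and gating details are illustrative.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.bn(x)
        # Normalize |gamma| across channels to obtain per-channel weights.
        gamma = self.bn.weight.abs()
        w = gamma / gamma.sum()
        # Reweight channels, then gate the residual input with a sigmoid.
        x = x * w.view(1, -1, 1, 1)
        return torch.sigmoid(x) * residual

# Usage sketch:
# attn = ChannelNAM(64)
# out = attn(torch.randn(8, 64, 32, 32))  # same shape as input
```

The weight sparsity penalty mentioned in the abstract would then be added to the training loss, for instance as an L1 term on the batch-norm scaling factors, pushing the weights of less salient channels toward zero; the L1 choice here is an assumption of this sketch.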
Submission Track: Extended abstract track, 3 pages max