Abstract: In this paper, we introduce a method to compress the intermediate feature maps of deep neural networks (DNNs) in order to reduce memory storage and bandwidth requirements during inference. Unlike previous works, the proposed method converts fixed-point activations into vectors over the smallest finite field, GF(2), and then applies nonlinear dimensionality reduction (NDR) layers embedded into the DNN. This end-to-end learned representation yields more compact feature maps by exploiting quantization redundancies within the fixed-point activations along the channel or spatial dimensions. We apply the proposed network architecture to ImageNet classification and PASCAL VOC object detection. Compared to prior approaches, our experiments show a factor-of-two decrease in memory requirements with minor accuracy degradation, while adding only bitwise computations.
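To make the pipeline concrete, below is a minimal sketch, assuming PyTorch, 1×1-convolution NDR layers, and activations already clipped to [0, 1]; the class `GF2Compression`, its layer widths, and the sigmoid used as a stand-in for true binarization are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GF2Compression(nn.Module):
    """Sketch: convert B-bit fixed-point activations into binary (GF(2))
    bit planes, then learn a compact representation with 1x1-conv NDR layers."""
    def __init__(self, channels: int, bits: int = 8, ratio: int = 2):
        super().__init__()
        self.bits = bits
        expanded = channels * bits          # one GF(2) plane per quantization bit
        compact = expanded // ratio         # 2x feature-map compression target
        self.encode = nn.Conv2d(expanded, compact, kernel_size=1)  # NDR projection
        self.decode = nn.Conv2d(compact, expanded, kernel_size=1)  # reconstruction

    def forward(self, x):
        # Fixed-point quantization to `bits` levels; assumes x in [0, 1].
        scale = 2 ** self.bits - 1
        q = torch.round(x.clamp(0, 1) * scale)
        # Unpack each activation into its binary bit planes along the channel dim.
        shifts = torch.arange(self.bits, device=x.device)
        planes = (q.unsqueeze(2).long() >> shifts.view(1, 1, -1, 1, 1)) & 1
        b, c, _, h, w = planes.shape
        planes = planes.reshape(b, c * self.bits, h, w).float()
        # Learned nonlinear dimensionality reduction; the compact tensor is
        # what would be binarized and stored off-chip during inference.
        z = torch.sigmoid(self.encode(planes))
        rec = torch.sigmoid(self.decode(z))
        # Pack the recovered bit planes back into fixed-point activations.
        rec_bits = rec.reshape(b, c, self.bits, h, w)
        weights = (2.0 ** shifts).view(1, 1, -1, 1, 1)
        return (rec_bits * weights).sum(dim=2) / scale
```

With `ratio=2`, the compact tensor holds half as many bit planes as the unpacked input, which is where the factor-of-two memory saving in the abstract would come from under these assumptions.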
TL;DR: Feature map compression method that converts quantized activations into binary vectors followed by nonlinear dimensionality reduction layers embedded into a DNN
Keywords: feature map, representation, compression, quantization, finite-field
Code: [gudovskiy/fmap_compression](https://github.com/gudovskiy/fmap_compression)
Data: ImageNet, PASCAL VOC
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/dnn-feature-map-compression-using-learned/code)