Keywords: Network Compression
TL;DR: We advance the state-of-the-art in model compression by proposing Atomic Compression Networks (ACNs), a novel architecture that is constructed by recursive repetition of a small set of neurons.
Abstract: Compressed forms of deep neural networks are essential in deploying large-scale
computational models on resource-constrained devices. Contrary to analogous
domains where large-scale systems are built as a hierarchical repetition of small-
scale units, the current practice in Machine Learning largely relies on models with
non-repetitive components. In the spirit of molecular composition with repeating
atoms, we advance the state-of-the-art in model compression by proposing Atomic
Compression Networks (ACNs), a novel architecture that is constructed by recursive
repetition of a small set of neurons. In other words, the same neurons with the
same weights are stochastically re-positioned in subsequent layers of the network.
Empirical evidence suggests that ACNs achieve compression rates of up to three
orders of magnitude compared to fine-tuned fully-connected neural networks (88×
to 1116× reduction) with only a modest deterioration in classification accuracy
(0.15% to 5.33%). Moreover, our method can yield sub-linear model complexities
and permits learning deep ACNs with fewer parameters than a logistic regression
with no decline in classification accuracy.
Code: https://drive.google.com/open?id=1weAzCzlI4p0L9vXsBI12LfNMlfGZdjd_
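For readers skimming before opening the linked code, here is a minimal sketch of the core idea as the abstract describes it: a small shared pool of neuron weight vectors ("atoms") is stochastically re-positioned across layers, so added depth re-uses the same weights rather than introducing new ones. This is an illustrative assumption of how such sharing could be wired up; the names (`ACNLayer`, `assign`) and the uniform random assignment scheme are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ACNLayer(nn.Module):
    """A layer whose neurons are drawn from a small shared pool ("atoms").

    Every atom is a weight vector of size `width`; each output neuron is a
    stochastically chosen (then fixed) copy of one atom, so the same weights
    are re-used within and across layers.
    """

    def __init__(self, atoms: nn.Parameter, width: int):
        super().__init__()
        self.atoms = atoms  # shared (num_atoms, width) parameter pool
        # Hypothetical re-positioning scheme: sample one atom per output
        # neuron uniformly at random and keep the assignment fixed.
        self.register_buffer("assign", torch.randint(atoms.shape[0], (width,)))
        self.bias = nn.Parameter(torch.zeros(width))

    def forward(self, x):
        weight = self.atoms[self.assign]  # assembled (width, width) view
        return torch.relu(x @ weight.t() + self.bias)


num_atoms, width, depth = 8, 32, 4
atoms = nn.Parameter(torch.randn(num_atoms, width) / width ** 0.5)
net = nn.Sequential(*(ACNLayer(atoms, width) for _ in range(depth)))

y = net(torch.randn(5, width))  # forward pass works as usual

# parameters() de-duplicates the shared pool, so extra depth only adds biases:
# 8*32 atom weights + 4*32 biases = 384 parameters, versus 4*(32*32 + 32)
# = 4224 for the equivalent fully-connected stack.
print(sum(p.numel() for p in net.parameters()))
```

Because the atom pool is a single shared `nn.Parameter`, gradients from every layer accumulate into it during backpropagation, which is what lets the parameter count grow sub-linearly in depth.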