Abstract: In recent years, the demand for compact deep neural networks (DNNs) has increased consistently, driven by the need to deploy them in resource-limited environments such as mobile or embedded devices. Our work tackles this challenge by combining two techniques: sparsification and ternarization of the network parameters. We extend plain binarization by introducing a sparsification interval centered around 0. Network parameters falling within this interval are set to 0 and effectively removed from the network topology. Despite the increased complexity of the ternarization scheme compared to a binary quantizer, we obtain remarkable sparsity rates that yield parameter distributions forming highly compressible sources with entropy below 1 bit/symbol.
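The core idea above can be illustrated with a minimal sketch. Here, parameters inside an assumed symmetric sparsification interval [-delta, +delta] are zeroed, and the remainder are mapped to +/-1; the threshold `delta` and the fixed +/-1 levels are illustrative assumptions, not necessarily the paper's exact quantization scheme.

```python
import numpy as np

def ternarize(weights, delta):
    """Sketch of ternarization with a sparsification interval.

    Weights with |w| <= delta are set to 0 (pruned from the topology);
    all others are quantized to +1 or -1. The interval width 'delta'
    is a hypothetical hyperparameter for illustration.
    """
    w = np.asarray(weights, dtype=float)
    q = np.sign(w)               # map surviving weights to +/-1
    q[np.abs(w) <= delta] = 0.0  # sparsify: zero out the interval around 0
    return q

# Usage: two of six weights fall inside the interval and become 0
w = np.array([0.8, -0.05, 0.3, -0.9, 0.02, -0.4])
print(ternarize(w, delta=0.1))  # [ 1.  0.  1. -1.  0. -1.]
```

A wider interval raises the sparsity rate (more zeros), which skews the symbol distribution toward 0 and lowers its entropy, making the quantized parameters more compressible.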