Abstract: The increasing demand for deep neural networks (DNNs) in resource-constrained systems has fueled interest in heavily quantized architectures such as networks with binarized weights. However, despite substantial progress in the field, the gap with full-precision performance is far from closed. Today's most effective quantization methods are rooted in proximal gradient descent theory. In this work, we propose ConQ, a novel concave regularization approach to train effective DNNs with binarized weights. Motivated by a theoretical investigation, we argue that the proposed concave regularizer, which removes the singularity point at 0, presents a more effective shape than previously considered models in terms of accuracy and convergence rate. We present a theoretical convergence analysis of ConQ, with specific insights on both convex and non-convex settings. An extensive experimental evaluation shows that ConQ achieves higher accuracy than competing regularization methods for networks with binarized weights.
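To make the singularity remark concrete, the minimal sketch below contrasts a standard W-shaped binarization penalty, which is non-differentiable at w = 0, with a generic smooth concave penalty. Both functions are illustrative assumptions for exposition only and are not necessarily the regularizer proposed by ConQ.

```python
import numpy as np

# Hypothetical illustration (not the ConQ regularizer itself):
# a W-shaped binarization penalty has a kink (singularity) at w = 0,
# while a concave alternative such as 1 - w^2 is differentiable there.

def w_shaped_penalty(w):
    # Distance to the nearest binary value in {-1, +1};
    # non-differentiable at w = 0, where a subgradient is needed.
    return np.minimum(np.abs(w - 1.0), np.abs(w + 1.0))

def concave_penalty(w):
    # Illustrative concave penalty on [-1, 1]: smooth at w = 0,
    # zero at w = +/-1, so it still pushes weights toward binary values.
    return 1.0 - w ** 2

w = np.linspace(-1.0, 1.0, 9)
print(w_shaped_penalty(w))  # kink at 0
print(concave_penalty(w))   # smooth everywhere on (-1, 1)
```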