Bit-wise Training of Neural Network Weights

Published: 28 Jan 2022, Last Modified: 22 Oct 2023
Venue: ICLR 2022 Submitted
Readers: Everyone
Keywords: quantization, pruning, bit-wise training, resnet, lenet
Abstract: We propose an algorithm in which the individual bits representing the weights of a neural network are learned. This method allows training weights with integer values on arbitrary bit-depths and naturally uncovers sparse networks, without additional constraints or regularization techniques. We show better results than the standard training technique with fully connected networks and performance similar to standard training for residual networks. By training bits in a selective manner we found that the biggest contribution to achieving high accuracy comes from the first three most significant bits, while the rest provide an intrinsic regularization. As a consequence we show that more than 90% of a network can be used to store arbitrary codes without affecting its accuracy. These codes can be random noise, binary files or even the weights of previously trained networks.
One-sentence Summary: We present an algorithm which allows training of a neural network's weights in a bit-wise fashion and show that 10% of the most significant bits contribute to the classification accuracy, while the rest can be random or used to store arbitrary codes.
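To make the idea concrete, the sketch below parameterizes each weight of a linear layer by per-bit logits that are hard-thresholded with a straight-through estimator and combined into a signed integer. This is a minimal illustration assuming a PyTorch setting; the layer name, bit layout, and scaling are assumptions for exposition, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitwiseLinear(nn.Module):
    """Linear layer whose integer weights are learned bit by bit (illustrative sketch)."""

    def __init__(self, in_features, out_features, n_bits=8):
        super().__init__()
        # One trainable logit per bit of every weight (the last bit serves as a sign bit).
        self.bit_logits = nn.Parameter(0.1 * torch.randn(out_features, in_features, n_bits))
        # Fixed place values 1, 2, 4, ... for the magnitude bits.
        self.register_buffer(
            "place_values", 2.0 ** torch.arange(n_bits - 1, dtype=torch.float32)
        )
        # Rescale so the integer weights land in a reasonable floating-point range.
        self.scale = 1.0 / 2.0 ** (n_bits - 1)

    def forward(self, x):
        soft = torch.sigmoid(self.bit_logits)
        hard = (soft > 0.5).float()
        # Straight-through estimator: forward uses hard {0,1} bits,
        # backward passes gradients through the sigmoid.
        bits = (hard - soft).detach() + soft
        sign = 1.0 - 2.0 * bits[..., -1]                      # last bit -> {+1, -1}
        magnitude = (bits[..., :-1] * self.place_values).sum(dim=-1)
        weight = self.scale * sign * magnitude                # signed integer weight, rescaled
        return F.linear(x, weight)

# Hypothetical usage: a drop-in replacement for nn.Linear in a small classifier.
layer = BitwiseLinear(784, 10, n_bits=8)
logits = layer(torch.randn(32, 784))
```

In this sketch a weight becomes exactly zero whenever all of its magnitude bits are driven to zero, which is one way the bit-wise parameterization can uncover sparse networks without an explicit sparsity penalty, consistent with the claim in the abstract.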
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2202.09571/code)