Lossless hardening with $\partial\mathbb{B}$ nets

Published: 20 Jun 2023, Last Modified: 17 Sept 2023
Venue: Differentiable Almost Everything
Keywords: machine learning, differentiable, activation functions
TL;DR: $\partial\mathbb{B}$ nets are differentiable neural networks that learn discrete boolean-valued functions by gradient descent without approximation.
Abstract: $\partial\mathbb{B}$ nets are differentiable neural networks that learn discrete boolean-valued functions by gradient descent. $\partial\mathbb{B}$ nets have two semantically equivalent aspects: a differentiable soft-net, with real weights, and a non-differentiable hard-net, with boolean weights. We train the soft-net by backpropagation and then "harden" the learned weights to yield boolean weights that bind with the hard-net. The result is a learned discrete function. Unlike existing approaches to neural network binarization, the "hardening" operation involves no loss of accuracy. Preliminary experiments demonstrate that $\partial\mathbb{B}$ nets achieve comparable performance on standard machine learning problems, yet are compact (due to 1-bit weights) and interpretable (due to the logical nature of the learned functions).
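To make the soft-net/hard-net correspondence concrete, here is a minimal sketch in Python of why hardening can be lossless. It assumes min/max-style soft logic over $[0, 1]$ and a $0.5$ threshold for hardening; the paper's actual operator and weight definitions may differ, so treat the specific functions below as illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Illustrative soft (differentiable almost everywhere) boolean operators.
# These are assumptions for the sketch, not the paper's exact operators.
def soft_not(x):
    return 1.0 - x

def soft_and(x, y):
    return np.minimum(x, y)

def soft_or(x, y):
    return np.maximum(x, y)

# "Hardening": map a real value in [0, 1] to a boolean by thresholding.
def harden(w):
    return w > 0.5

# Key property: hardening commutes with these operators, because
#   min(x, y) > 0.5  iff  (x > 0.5) and (y > 0.5)
#   max(x, y) > 0.5  iff  (x > 0.5) or  (y > 0.5)
#   1 - x > 0.5      iff  not (x > 0.5)   (away from x == 0.5)
# so evaluating the soft-net and then hardening gives the same answer
# as hardening first and evaluating the boolean hard-net: no accuracy
# is lost in the discretization step.
x, y = 0.8, 0.3                      # soft values, e.g. learned weights
hx, hy = harden(x), harden(y)        # hardened booleans: True, False
assert harden(soft_and(x, y)) == (hx and hy)
assert harden(soft_or(x, y)) == (hx or hy)
assert harden(soft_not(x)) == (not hx)
```

Under these assumptions the hardened network is a pure boolean circuit with 1-bit weights, which is what makes the learned function both compact and inspectable.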
Submission Number: 21