Scalar Invariant Networks with Zero Bias

Published: 29 Nov 2023, Last Modified: 29 Nov 2023
NeurReps 2023 Poster
Submission Track: Proceedings
Keywords: Geometric structure; Invariance; Zero Bias; Robustness; Expressive power.
TL;DR: We demonstrate that neural networks can exhibit intriguing properties, such as scalar (multiplicative) invariance, when bias terms are removed.
Abstract: Just like weights, bias terms are learnable parameters in many popular machine learning models, including neural networks. Biases are believed to enhance the representational power of neural networks, enabling them to tackle various tasks in computer vision. Nevertheless, we argue that biases can be disregarded for some image-related tasks, such as image classification, by considering the intrinsic distribution of images in the input space and the desired model properties from first principles. Our empirical results suggest that zero-bias neural networks can perform comparably to standard networks on practical image classification tasks. Furthermore, we demonstrate that zero-bias neural networks possess a valuable property known as scalar (multiplicative) invariance: the network's predictions remain unchanged even when the contrast of the input image is altered. We further extend the scalar invariance property to more general cases, thereby attaining robustness within certain convex regions of the input space. We believe that dropping bias terms can be regarded as a geometric prior when designing neural network architectures for image classification, in the same spirit as adopting convolutions as a translation-invariance prior.
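The scalar-invariance claim follows from positive homogeneity: with all biases removed, each linear layer and each ReLU commutes with positive scaling of its input, so rescaling the input image rescales every logit by the same positive factor and the predicted class is unchanged. Below is a minimal NumPy sketch of this property; it is not the authors' code, and the toy two-layer architecture, layer sizes, and random weights are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy zero-bias two-layer MLP: f(x) = W2 @ relu(W1 @ x), no bias anywhere.
W1 = rng.standard_normal((64, 3 * 32 * 32))   # hidden-layer weights (assumed sizes)
W2 = rng.standard_normal((10, 64))            # output (logit) layer weights

def relu(z):
    return np.maximum(z, 0.0)

def logits(x):
    # With no biases, every layer is positively homogeneous,
    # so logits(c * x) == c * logits(x) for any c > 0.
    return W2 @ relu(W1 @ x)

x = rng.standard_normal(3 * 32 * 32)          # a flattened "image"
c = 2.5                                       # a contrast-like rescaling factor

l_orig, l_scaled = logits(x), logits(c * x)
assert np.allclose(l_scaled, c * l_orig)            # logits scale by exactly c
assert np.argmax(l_orig) == np.argmax(l_scaled)     # predicted class is unchanged
print("prediction invariant under input scaling:", np.argmax(l_orig))
```

Note that softmax probabilities do change under such a rescaling (it acts like a temperature change on the logits), but the ranking of the classes, and hence the hard prediction, is preserved.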
Submission Number: 1