Are Binary Neural Networks Input Bit-Invariant? Optimizing Sensor-Model Systems with Mean Hassanat Distance

Published: 23 Sept 2025 · Last Modified: 21 Nov 2025 · L2S Poster · CC BY 4.0
Keywords: Binary Neural Networks, Input bit-invariance, Low-bit sensor inputs, Coreset selection, Energy-efficient sensing, Mean Hassanat Distance, Sensor-model co-design
Abstract: Resource-constrained sensing systems must balance data quality and energy consumption, yet the interaction between low-bit sensor inputs and model architecture remains poorly understood. We present a systematic evaluation of how Binary Neural Networks (BNNs) and FP32 models respond to joint optimization of data volume (via coresets) and input bit-precision (1–8 bit quantization). Our experiments on CIFAR-10 reveal a surprising architectural difference: BNNs maintain near-constant accuracy across all input bit-depths, whereas FP32 models degrade predictably (from 84.7\% at 8-bit to 56.0\% at 1-bit). This bit-invariance property enables BNNs to operate with variable-quality sensors without retraining, which is critical for adaptive sensing scenarios. We identify 75\% of training data at 4-bit precision as an optimal operating point, achieving an 87.5\% resource reduction. At this point, accuracy remains 82.60\% for FP32 models and 66.01\% for BNNs. Additionally, the Mean Hassanat Distance (MHD) between original and compressed weights exhibits strong negative correlations for BNNs (Pearson linear correlation coefficient, PLCC, of -0.87 to -0.95), revealing fundamental differences in how binary networks respond to compression. These findings provide practical guidelines for deploying neural networks in energy-constrained environments with dynamically varying sensor quality, advancing the co-design of sensors and models for efficient machine perception.
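The two core quantities in the abstract can be sketched in a few lines. Below is a minimal, hedged illustration (not the authors' code): `quantize_input` is a hypothetical helper showing one common way to reduce 8-bit sensor values to the 1–8 bit inputs swept in the experiments, and `mean_hassanat_distance` implements the standard per-dimension Hassanat distance averaged over a weight vector, which is the usual reading of MHD.

```python
import numpy as np

def quantize_input(x, bits):
    """Uniformly re-quantize 8-bit sensor values (0-255) to `bits` bits.

    Hypothetical helper illustrating the 1-8 bit input sweep: values are
    binned into 2**bits uniform levels and mapped to the bin centre.
    """
    levels = 2 ** bits
    step = 256.0 / levels
    q = np.floor(np.asarray(x, dtype=np.float64) / step)  # bin index
    return np.clip(q * step + step / 2.0, 0, 255)         # bin centre

def mean_hassanat_distance(a, b):
    """Mean Hassanat Distance (MHD) between two weight vectors.

    Per dimension (Hassanat distance):
      if min(a_i, b_i) >= 0:  1 - (1 + min) / (1 + max)
      otherwise:              1 - (1 + min + |min|) / (1 + max + |min|)
    The MHD is the mean over all dimensions.
    """
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    lo = np.minimum(a, b)
    hi = np.maximum(a, b)
    shift = np.where(lo < 0, -lo, 0.0)  # shift negative pairs to non-negative
    d = 1.0 - (1.0 + lo + shift) / (1.0 + hi + shift)
    return d.mean()
```

Identical vectors give an MHD of exactly 0, and the distance grows toward 1 as the per-dimension values diverge, which is why a strong negative PLCC between MHD and accuracy indicates that larger weight perturbations under compression track larger accuracy drops.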
Submission Number: 21