Can DBNNs Be Robust to Environmental Noise in Resource-Constrained Scenarios?

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Improve the robustness of DBNNs under environmental noise.
Abstract: Recently, the potential of lightweight models for resource-constrained scenarios has garnered significant attention, particularly in safety-critical tasks such as bio-electrical signal classification and B-ultrasound-assisted diagnosis. These tasks are frequently affected by environmental noise from patient movement artifacts and inherent device noise, which poses significant challenges for lightweight models (e.g., deep binary neural networks (DBNNs)) to perform robust inference. A pertinent question arises: can a well-trained DBNN effectively resist environmental noise during inference? In this study, we find that a DBNN's vulnerability to noise stems from its binary weights and scaling factors. Drawing upon theoretical insights, we propose L1-infinity norm constraints on the binary weights and scaling factors, which yield a tighter upper bound than existing state-of-the-art (SOTA) methods. Visualization studies further show that our approach introduces minimal noise perturbations at the periphery of the feature maps. Experiments on bio-electrical signal and image classification datasets validate that our approach outperforms SOTA methods. We hope our findings raise awareness among researchers of the environmental-noise robustness of DBNNs.
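To make the core idea concrete, below is a minimal PyTorch sketch of an L1-infinity norm penalty applied to a binarized layer's scaled weights. The `BinaryLinear` layer, the `l1_inf_penalty` helper, and the training snippet are illustrative assumptions, not the authors' implementation; see the linked repository for the actual method.

```python
import torch
import torch.nn as nn

class BinaryLinear(nn.Module):
    """Hypothetical binarized linear layer with per-output-channel
    scaling factors, XNOR-Net style: y = (alpha * sign(W)) x."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        # One scaling factor per output channel.
        self.alpha = nn.Parameter(torch.ones(out_features, 1))

    def forward(self, x):
        # Straight-through estimator: sign() in the forward pass,
        # identity gradient in the backward pass.
        w_bin = torch.sign(self.weight).detach() + self.weight - self.weight.detach()
        return nn.functional.linear(x, self.alpha * w_bin)

def l1_inf_penalty(layer: BinaryLinear) -> torch.Tensor:
    """L1-infinity norm of the effective (scaled binary) weights:
    the maximum over output channels of the row-wise L1 norm.
    Since each binary weight has magnitude one, a row's L1 norm is
    |alpha_i| * in_features, so the penalty caps the largest scaling
    factor and thereby the layer's worst-case noise amplification."""
    effective = layer.alpha * torch.sign(layer.weight)
    return effective.abs().sum(dim=1).max()

# Illustrative usage: add the penalty to a (dummy) task loss.
layer = BinaryLinear(64, 10)
x = torch.randn(8, 64)
logits = layer(x)
loss = logits.pow(2).mean() + 1e-3 * l1_inf_penalty(layer)
loss.backward()
```

The design intuition, under these assumptions, is that bounding the L1-infinity norm of the scaled binary weights limits how much a bounded input perturbation can grow as it passes through each layer.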
Lay Summary: Recent advancements in lightweight AI models have sparked interest in their use for critical healthcare tasks—like analyzing bio-electrical signals or ultrasound images—especially in settings with limited computational resources. However, these models, such as deep binary neural networks (DBNNs), often struggle to perform reliably in real-world scenarios due to environmental noise caused by patient movements or device limitations. A key question emerges: Can these models maintain accuracy under such noisy conditions? Our study reveals that the vulnerability of DBNNs to noise stems from their simplified "binary" parameters and scaling factors. To address this, we propose a novel method that applies mathematical constraints (L1-infinity norms) to these components, significantly improving their noise resistance compared to existing techniques. Visual experiments confirm that our approach minimizes distortions at the edges of feature maps, a common source of errors. Tests on medical and image datasets demonstrate superior performance over current state-of-the-art methods.
Link To Code: https://gitee.com/wdz2020/robust-dbnn
Primary Area: Deep Learning->Robustness
Keywords: Deep binary neural networks, Environmental noise, Robustness
Submission Number: 6020