No Data, No Optimization: A Lightweight Method To Disrupt Neural Networks With Sign-Flips

TMLR Paper 6531 Authors

17 Nov 2025 (modified: 24 Nov 2025) · Under review for TMLR · CC BY 4.0
Abstract: Deep Neural Networks (DNNs) can be catastrophically disrupted by flipping only a handful of sign bits in their parameters. We introduce Deep Neural Lesion (DNL), a data-free, lightweight method that locates these critical parameters and triggers massive accuracy drops. We validate its efficacy on a wide variety of computer vision models and datasets. The method requires no training data or optimization and can be carried out via common software-, firmware-, or hardware-based attack vectors. An enhanced variant that uses a single forward and backward pass further amplifies the damage beyond DNL's zero-pass approach. Flipping just two sign bits in ResNet50 on ImageNet reduces accuracy by 99.8%. We also show that selectively protecting a small fraction of vulnerable sign bits provides a practical defense against such attacks.
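To make the attack setting concrete, below is a minimal, illustrative PyTorch sketch of a gradient-guided sign-flip attack that uses a single forward and backward pass, as the abstract's enhanced variant does. The ranking criterion (a first-order estimate of the loss increase from negating a weight), the use of a random placeholder input, and the function name `gradient_guided_sign_flip` are assumptions for illustration only, not the paper's exact DNL procedure.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Illustrative sketch only: rank weights by a first-order estimate of how much
# negating them would increase the loss, then flip the top candidates' signs.
# The selection criterion and the random input batch are assumptions, not the
# paper's exact method.

def gradient_guided_sign_flip(model: nn.Module, x: torch.Tensor,
                              y: torch.Tensor, num_flips: int = 2) -> None:
    """Negate the `num_flips` weights whose sign flip is predicted to
    increase the loss the most, based on one forward and one backward pass."""
    model.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()

    # First-order estimate of the loss change when w -> -w is g * (-2w),
    # so weights with large negative g*w are the most damaging candidates.
    # For simplicity, keep only the single best candidate per tensor.
    candidates = []
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        score = (-2.0 * p.grad.detach() * p.detach()).flatten()
        top_val, top_idx = score.max(dim=0)
        candidates.append((top_val.item(), name, top_idx.item()))

    # Flip the sign of the highest-scoring weights in place.
    params = dict(model.named_parameters())
    with torch.no_grad():
        for _, name, idx in sorted(candidates, reverse=True)[:num_flips]:
            params[name].view(-1)[idx] *= -1.0


if __name__ == "__main__":
    model = models.resnet50(weights=None).eval()
    x = torch.randn(1, 3, 224, 224)      # placeholder input (no real data)
    y = torch.randint(0, 1000, (1,))     # placeholder label
    gradient_guided_sign_flip(model, x, y, num_flips=2)
```

In a real deployment-time attack, the sign flip would correspond to toggling the sign bit of the stored floating-point value rather than multiplying by -1; the in-place negation above is a functional stand-in for that bit flip.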
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Wei_Liu3
Submission Number: 6531