Adversarial Inputs for Linear Algebra Backends

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Linear algebra is a cornerstone of neural network inference. The efficiency of popular frameworks, such as TensorFlow and PyTorch, critically depends on backend libraries providing highly optimized matrix multiplications and convolutions. A diverse range of these backends exists across platforms, including Intel MKL, Nvidia CUDA, and Apple Accelerate. Although these backends provide equivalent functionality, subtle variations in their implementations can lead to seemingly negligible differences during inference. In this paper, we investigate these minor discrepancies and demonstrate how they can be selectively amplified by adversaries. Specifically, we introduce *Chimera examples*, inputs to models that elicit conflicting predictions depending on the employed backend library. These inputs can even be constructed with integer values, creating a vulnerability exploitable from real-world input domains. We analyze the prevalence and extent of the underlying attack surface and propose corresponding defenses to mitigate this threat.
Lay Summary: Neural networks rely heavily on math operations, such as multiplying large tables of numbers. To run these calculations efficiently, machine learning systems use specialized software libraries (called linear algebra backends) that are tailored to different types of hardware, like Intel processors, Nvidia graphics cards, or Apple devices. While all these backends perform the same basic computations, they do so in slightly different ways. These tiny differences usually don't matter. In this paper, however, we show that attackers can exploit them. We introduce what we call Chimera examples: carefully crafted inputs that make a neural network produce different results depending on the backend it uses. We examine how often these differences appear, how impactful they can be, and how to defend against this kind of vulnerability.
Link To Code: https://github.com/mlsec-group/dila
Primary Area: Deep Learning->Robustness
Keywords: adversarial inputs, linear algebra backends, floating-point errors
Submission Number: 6660