What makes vision transformers robust towards bit-flip attack?

ICLR 2024 Workshop ME-FoMo Submission 75 Authors

Published: 04 Mar 2024, Last Modified: 05 May 2024. ME-FoMo 2024 Poster. License: CC BY 4.0
Keywords: bit-flip attack, neural network security, vision transformer
TL;DR: This paper studies the vulnerabilities of vision transformers to the bit-flip attack.
Abstract: The bit-flip attack (BFA) is a well-studied attack that can dramatically degrade the accuracy of a machine learning model by flipping a small number of bits in its parameters. Numerous studies have focused on enhancing the performance of BFAs and on mitigating their effects on traditional Convolutional Neural Networks (CNNs). However, the security of vision transformers against BFAs remains poorly understood. In this work, we conduct various experiments on vision transformer models and find that the flipped bits are concentrated in the MLP layers, specifically in the first and last few blocks. Furthermore, we find an inverse relationship between the size of a transformer model and its robustness. These findings can help refine defense techniques by targeting the areas of vision transformer models that are particularly vulnerable to BFA.
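To make the threat model concrete, below is a minimal sketch of the basic mechanic the abstract describes: toggling a single bit in an int8-quantized weight. This is an illustrative example only, not the paper's attack procedure (practical BFAs additionally search for the most damaging bits); the weight values, index, and bit position are arbitrary assumptions.

```python
# Illustrative sketch of one bit flip in an int8-quantized weight tensor.
# All concrete values here are hypothetical, chosen only for demonstration.
import numpy as np

def flip_bit(weights: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Return a copy of an int8 weight array with one bit toggled
    in the element at flat position `index`."""
    assert weights.dtype == np.int8 and 0 <= bit < 8
    out = weights.copy()
    view = out.view(np.uint8)               # reinterpret bytes so toggling bit 7 is safe
    view.flat[index] ^= np.uint8(1 << bit)  # XOR with a one-hot mask flips exactly that bit
    return out

# Flipping the sign bit of a two's-complement int8 weight shifts its
# value by 128, which is why a handful of flips can wreck accuracy.
w = np.array([23, -4, 7], dtype=np.int8)
print(flip_bit(w, index=0, bit=7))  # 23 (0b00010111) -> -105 (0b10010111)
```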
Submission Number: 75