When Vision Fails: Text Attacks Against ViT and OCR

TMLR Paper 3324 Authors

10 Sept 2024 (modified: 12 Nov 2024) · Under review for TMLR · CC BY 4.0
Abstract: Text-based machine learning models are vulnerable to an emerging class of Unicode-based adversarial examples capable of tricking a model into misreading text, with potentially disastrous effects. The primary existing defense against these attacks is to preprocess potentially malicious text inputs using optical character recognition (OCR). In theory, OCR models ignore any malicious Unicode characters and extract the visually correct input to be fed to the model. In this work, we show that these visual defenses fail to prevent this type of attack. We use a genetic algorithm to generate visual adversarial examples (i.e., OCR outputs) in a black-box setting, demonstrating a novel, highly effective attack that substantially reduces the accuracy of OCR and other visual models. Specifically, we use the Unicode functionality of combining characters (e.g., $\tilde{n}$, which combines the characters $n$ and $\sim$) to manipulate text inputs so that small visual perturbations appear when the text is displayed. We demonstrate the effectiveness of these attacks in the real world by creating adversarial examples against production models published by Meta, Microsoft, IBM, and Google. We additionally conduct a user study to establish that the model-fooling adversarial examples do not affect human comprehension of the text, showing that language models are uniquely vulnerable to this type of text attack.
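The combining-character mechanism the abstract relies on can be illustrated with a few lines of Python. The following is a minimal sketch, not the paper's attack pipeline or genetic-algorithm search; it only shows how a string built with a combining mark renders (near-)identically to a precomposed string while differing at the code-point level, which is the property an attacker can exploit and which OCR preprocessing is meant to neutralize. The example word "jalapeño" is chosen purely for illustration.

```python
# Illustrative sketch of the Unicode combining-character mechanism.
# Not the authors' attack code; it only demonstrates that visually
# equivalent strings can differ in their underlying code points.
import unicodedata

precomposed = "jalape\u00f1o"      # uses U+00F1 LATIN SMALL LETTER N WITH TILDE
decomposed  = "jalapen\u0303o"     # 'n' followed by U+0303 COMBINING TILDE

print(precomposed, decomposed)     # both display as "jalapeño"
print(precomposed == decomposed)   # False: different code-point sequences

# A text model that does not normalize its input sees two distinct
# character sequences; Unicode normalization (e.g., NFC) collapses them.
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
```

Because the two strings display alike but tokenize differently, an attacker can insert combining marks that leave human-readable text intact while perturbing what a text model, or an OCR-based preprocessing defense, actually receives.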
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=NN722gaBeP
Changes Since Last Submission: Corrected the font, which diverged from TMLR requirements and led to a desk rejection.
Assigned Action Editor: ~Hsuan-Tien_Lin1
Submission Number: 3324