The vulnerability of neural networks to adversarial examples is an important concern in machine learning. Despite active research on attack and defense algorithms, we lack a clear understanding of the origin of this vulnerability. This study provides a theoretical analysis of the relationship between the architecture of neural networks and their robustness to adversarial attacks, focusing on linear Convolutional Neural Networks (CNNs). Using the theory of implicit biases in linear neural networks, we provide a mathematical characterization of how kernel size and network depth affect adversarial robustness, deriving upper and lower bounds that capture these relationships. Our experiments on popular image datasets align closely with the theoretical trends, allowing us to conclude that the robustness of linear CNNs to adversarial attacks decreases as kernel size and depth increase. Moreover, our theory strengthens the bridge between implicit bias and robustness, laying the groundwork for further study of robustness from this perspective.
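As a rough illustration of the setting described above (not the paper's construction), the sketch below builds a toy linear CNN in NumPy. Because the network has no nonlinearities, the stacked convolutions and the read-out collapse into a single effective linear functional, and for such a model the smallest ℓ2 perturbation that flips the prediction is |f(x)| / ||w_eff||₂, the standard distance-to-boundary for linear classifiers. The `circulant` helper, the circular (wrap-around) convolutions, and the chosen dimensions are illustrative assumptions, not details from the paper.

```python
import numpy as np

def circulant(kernel, n):
    # n x n circulant matrix that applies `kernel` circularly to a length-n signal
    c = np.zeros(n)
    c[: len(kernel)] = kernel
    return np.stack([np.roll(c, i) for i in range(n)], axis=0)

rng = np.random.default_rng(0)
n, depth, kernel_size = 32, 3, 5  # illustrative sizes, not the paper's setup

# A "linear CNN": a stack of convolution layers with no nonlinearities,
# followed by a linear read-out.
layers = [circulant(rng.normal(size=kernel_size), n) for _ in range(depth)]
readout = rng.normal(size=n)

# With no nonlinearities the whole network collapses to one linear functional:
# f(x) = w_eff . x, where w_eff composes the read-out with every conv layer.
w_eff = readout
for layer in reversed(layers):
    w_eff = w_eff @ layer

x = rng.normal(size=n)
margin = w_eff @ x

# For a linear classifier, the smallest l2 perturbation that changes the sign of
# the output is |f(x)| / ||w_eff||_2 -- a simple robustness proxy for this toy model.
robustness = abs(margin) / np.linalg.norm(w_eff)
print(f"distance to decision boundary: {robustness:.4f}")
```

Under this toy measure, how kernel size and depth shape w_eff (and hence the distance to the decision boundary) is exactly the kind of dependence the paper's bounds formalize through the implicit bias of gradient descent on linear CNNs.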