Keywords: Transformers, Attention, Geometry, Robustness
TL;DR: We link the Ricci curvature of attention maps, a measure of system robustness, to transformer training and robustness, and introduce a method for adjusting the curvature and geometry of the attention map to influence transformer behavior.
Abstract: Transformer models have revolutionized machine learning, yet the theoretical underpinnings of their success are only beginning to be explored. In this work, we analyze the performance and robustness of transformers by treating the attention mechanism as a graph operator, focusing on the geometry of attention maps viewed as weighted graphs. Specifically, we investigate the role of Ricci curvature, a metric closely tied to graph spectral properties and system robustness, in shaping the training dynamics and robustness of transformers. Our theoretical analysis establishes a link between Ricci curvature and the convergence of gradient descent on transformers, and consequently their training and fine-tuning. We also show that attention graphs whose Ricci curvature distribution contains a higher frequency of positive values, and hence greater system robustness, yield more robust transformers, highlighting the impact of curvature on model robustness. Leveraging these insights, we propose an efficient regularization method for training curvature-adjusted transformers. Supporting our theoretical findings, experiments show that the proposed manipulation of attention curvature can improve the learning speed, performance, or generalizability of vision and language transformers. Our observations also point to a trade-off between performance and robustness. This work demonstrates that the geometry of the attention map provides a theoretically elegant and computationally versatile framework for analyzing and manipulating transformer training, generalization, performance, and robustness, opening new avenues for designing models using geometric concepts.
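For intuition only: the abstract does not specify which discrete Ricci curvature or graph construction the paper uses, but a common, inexpensive choice is the weighted Forman-Ricci curvature of the graph induced by a symmetrized attention matrix. The minimal sketch below (the helper name `forman_ricci_from_attention`, the symmetrization, and the unit node weights are all assumptions, not the authors' method) shows how one could read an edge-curvature distribution off a single attention head; a histogram of the finite entries of the returned matrix gives the kind of curvature distribution whose skew toward positive values the abstract relates to robustness.

```python
import numpy as np

def forman_ricci_from_attention(attn, eps=1e-8):
    """Weighted Forman-Ricci curvature of each edge of the undirected graph
    induced by one head's attention matrix `attn` (shape (n, n)).
    Returns an (n, n) array; entries for absent edges are NaN."""
    # Treat the attention map as an undirected weighted graph (assumption).
    W = 0.5 * (attn + attn.T)
    np.fill_diagonal(W, 0.0)
    n = W.shape[0]
    curv = np.full((n, n), np.nan)
    for u in range(n):
        for v in range(u + 1, n):
            w_e = W[u, v]
            if w_e <= eps:
                continue  # no edge between u and v
            # Weights of edges incident to u and to v, excluding (u, v) itself.
            nbr_u = [W[u, k] for k in range(n) if k not in (u, v) and W[u, k] > eps]
            nbr_v = [W[v, k] for k in range(n) if k not in (u, v) and W[v, k] > eps]
            # Forman curvature with unit node weights:
            # F(e) = w_e * (2/w_e - sum_{e'~u} 1/sqrt(w_e w_{e'}) - sum_{e'~v} 1/sqrt(w_e w_{e'}))
            f = w_e * (2.0 / w_e
                       - sum(1.0 / np.sqrt(w_e * w) for w in nbr_u)
                       - sum(1.0 / np.sqrt(w_e * w) for w in nbr_v))
            curv[u, v] = curv[v, u] = f
    return curv
```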
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3996