Keywords: Kolmogorov-Arnold Network; Transformer
TL;DR: In this paper, we introduce the Kolmogorov–Arnold Transformer (KAT), a novel architecture that replaces MLP layers with Kolmogorov–Arnold Network (KAN) layers to enhance model expressiveness and performance.
Abstract: Transformers stand as the cornerstone of modern deep learning. Traditionally, these models rely on multi-layer perceptron (MLP) layers to mix information across channels. In this paper, we introduce the Kolmogorov–Arnold Transformer (KAT), a novel architecture that replaces MLP layers with Kolmogorov–Arnold Network (KAN) layers to enhance the expressiveness and performance of the model. Integrating KANs into transformers, however, is no easy feat, especially at scale. Specifically, we identify three key challenges: (C1) Base function. The standard B-spline functions used in KANs are not optimized for parallel computing on modern hardware, resulting in slower inference. (C2) Parameter and computation inefficiency. KANs require a unique function for each input–output pair, which makes the computation prohibitively expensive. (C3) Weight initialization. Initializing weights in KANs is particularly challenging because of their learnable activation functions, which are critical for achieving convergence in deep neural networks. To overcome these challenges, we propose three key solutions: (S1) Rational basis. We replace B-spline functions with rational functions to improve compatibility with modern GPUs; by implementing this in CUDA, we achieve faster computation. (S2) Group KAN. We share activation weights across groups of neurons, reducing the computational load without sacrificing performance. (S3) Variance-preserving initialization. We carefully initialize the activation weights so that activation variance is preserved across layers. With these designs, KAT scales effectively and readily outperforms traditional MLP-based transformers. We demonstrate the advantages of KAT across various tasks, including image recognition, object detection, and semantic segmentation. It consistently enhances performance over standard transformer architectures across different model sizes.
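A minimal PyTorch sketch of how (S1)–(S3) might fit together as a drop-in replacement for a transformer's MLP block is given below. The class names, polynomial degrees, the "safe" denominator form 1 + |Q(x)|, and the per-group gain standing in for the variance-preserving initialization are illustrative assumptions, not the paper's exact formulation or its CUDA implementation.

```python
# Hypothetical sketch: group-wise rational activation (S1, S2) with a
# per-group gain as a stand-in for variance-preserving init (S3).
import torch
import torch.nn as nn


class GroupRationalActivation(nn.Module):
    """phi(x) = gain * P(x) / (1 + |Q(x)|), coefficients shared per channel group."""

    def __init__(self, dim, groups=8, p_degree=5, q_degree=4, gain=1.0):
        super().__init__()
        assert dim % groups == 0
        self.dim, self.groups = dim, groups
        # Numerator coefficients a_0..a_m and denominator coefficients b_1..b_n,
        # one set per group rather than per input-output pair (S2).
        self.a = nn.Parameter(torch.randn(groups, p_degree + 1) * 0.1)
        self.b = nn.Parameter(torch.zeros(groups, q_degree))
        # Assumed placeholder for (S3): a scale intended to keep the output
        # variance close to the input variance across layers.
        self.gain = nn.Parameter(torch.full((groups,), gain))

    def forward(self, x):                      # x: (..., dim)
        shape = x.shape
        x = x.view(*shape[:-1], self.groups, self.dim // self.groups)
        # Precompute powers x^0..x^max once, reused by numerator and denominator.
        powers = [torch.ones_like(x)]
        for _ in range(max(self.a.shape[1], self.b.shape[1] + 1) - 1):
            powers.append(powers[-1] * x)
        P = sum(self.a[:, k, None] * powers[k] for k in range(self.a.shape[1]))
        # "Safe" denominator stays strictly positive, avoiding poles.
        Q = 1.0 + torch.abs(
            sum(self.b[:, k, None] * powers[k + 1] for k in range(self.b.shape[1]))
        )
        return (self.gain[:, None] * P / Q).view(shape)


class GroupKANLayer(nn.Module):
    """Assumed drop-in for the MLP block: linear -> grouped rational -> linear."""

    def __init__(self, dim, hidden, groups=8):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.act = GroupRationalActivation(hidden, groups)
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))


if __name__ == "__main__":
    layer = GroupKANLayer(dim=64, hidden=256)
    tokens = torch.randn(2, 197, 64)   # (batch, tokens, channels)
    print(layer(tokens).shape)         # torch.Size([2, 197, 64])
```

Sharing one coefficient set per group keeps the parameter count of the activation independent of the channel count divided by the group size, which is the efficiency argument behind (S2); the actual KAT kernels evaluate the rational functions in fused CUDA code rather than with the Python-level loop shown here.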
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2086