Keywords: Deep Learning, KAN, MLP, Fourier
TL;DR: We propose the Kolmogorov-Arnold-Fourier Network (KAF), which integrates trainable Fourier features to solve the parameter explosion of KANs and improve high-frequency representation in high-dimensional tasks.
Abstract: Although Kolmogorov-Arnold Networks (KANs), which are based on the Kolmogorov-Arnold representation theorem, possess strong theoretical expressiveness, they face severe scalability bottlenecks in high-dimensional tasks, specifically parameter explosion and difficulty in capturing high-frequency features. To address these issues, we propose the Kolmogorov-Arnold-Fourier Network (KAF), which fundamentally redefines the KAN paradigm through spectral reparameterization. Our key contributions include: (1) proposing a fundamental basis transformation from the local, grid-based B-spline representation to a global, adaptive spectral representation. This shift changes the network's inductive bias, reducing parameter complexity from $O(G)$ to $O(1)$ while preserving expressiveness; (2) introducing trainable Random Fourier Features initialized via a spectral alignment strategy, which allows the model to break the smoothness limitation of fixed kernels and accurately capture high-frequency components; and (3) implementing an adaptive hybrid GELU-Fourier activation mechanism that progressively enhances frequency representation during training. Comprehensive experiments demonstrate the superiority of KAF across vision, NLP, audio, and PDE-solving tasks, achieving state-of-the-art performance (e.g., 93.15\% on CIFAR-10) with significantly improved efficiency. We will release the source code in accordance with the review policy.
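To make the abstract's main ingredients concrete, the following is a minimal NumPy sketch of a layer combining trainable random Fourier features with a GELU branch through an adaptive mixing weight. This is an illustrative reconstruction, not the paper's implementation: the class name `KAFLayer`, the parameter shapes, the scalar mixing weight `alpha`, and the initialization scales are all assumptions; the paper's spectral-alignment initialization and training procedure are not reproduced here.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

class KAFLayer:
    """Hypothetical sketch of a KAF-style layer: a trainable Fourier-feature
    branch mixed with a GELU branch via a learnable coefficient alpha."""

    def __init__(self, d_in, d_out, n_freq=16, seed=0):
        rng = np.random.default_rng(seed)
        # Trainable frequency matrix, initialized like random Fourier features
        # (Gaussian draws; the paper uses a spectral alignment strategy instead).
        self.W_freq = rng.normal(0.0, 1.0, size=(d_in, n_freq))
        # Output projection over the concatenated [cos, sin] features.
        self.W_out = rng.normal(0.0, 1.0 / np.sqrt(2 * n_freq), size=(2 * n_freq, d_out))
        # Plain linear path passed through GELU.
        self.W_lin = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(d_in, d_out))
        # Adaptive mixing weight: starts at 0 (pure GELU) and would be learned,
        # progressively enhancing the Fourier branch during training.
        self.alpha = 0.0

    def forward(self, x):
        z = x @ self.W_freq
        fourier = np.concatenate([np.cos(z), np.sin(z)], axis=-1) @ self.W_out
        smooth = gelu(x @ self.W_lin)
        return (1.0 - self.alpha) * smooth + self.alpha * fourier

layer = KAFLayer(d_in=8, d_out=4)
y = layer.forward(np.ones((2, 8)))
print(y.shape)
```

Because the frequency matrix `W_freq` is trainable rather than fixed, the layer is not restricted to the smooth kernel implied by the initial Gaussian draw, which is the mechanism the abstract credits for capturing high-frequency components. Note also that the per-edge Fourier basis has a fixed number of coefficients (`2 * n_freq`), independent of any spline grid size $G$, which is the source of the $O(G) \to O(1)$ parameter reduction claimed above.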
Primary Area: foundation or frontier models, including LLMs
Submission Number: 805