Kolmogorov-Arnold Networks with Variable Function Basis

17 Sept 2024 (modified: 26 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Kolmogorov-Arnold Networks; variable function basis; interpretability; Weierstrass Approximation Theorem; Bernstein polynomial; multivariate time series forecasting; image classification; learning the correct univariate functions.
Abstract: \begin{abstract} Neural networks exhibit exceptional performance in processing complex data, yet their internal structures remain largely unexplored. The emergence of Kolmogorov-Arnold Networks (KANs) represents a significant departure from traditional Multi-Layer Perceptrons (MLPs). In contrast to MLPs, KANs replace fixed activation functions at nodes (``neurons'') with learnable activation functions on edges (``weights''), enhancing both accuracy and interpretability. As data evolves, the demand for models that are both flexible and robust, minimizing the influence of input-data variability, continues to grow. Addressing this need, we propose a general framework for KANs utilizing a \underline{\textbf{V}}ariable \underline{\textbf{B}}ernstei\underline{\textbf{n}} Polynomial Function Basis for \underline{\textbf{K}}olmogorov-\underline{\textbf{A}}rnold \underline{\textbf{N}}etworks (VBn-KAN). This framework leverages the Weierstrass approximation theorem to theoretically extend the function basis within KANs, specifically selecting Bernstein polynomials ($B_n$) for their robustness, which is assured by the uniform convergence proposition. Additionally, to enhance flexibility, we implement techniques to vary the function basis $B_n$ when handling diverse datasets. Comprehensive experiments across three fields---multivariate time series forecasting, computer vision, and function approximation---demonstrate that our method outperforms conventional approaches and other variants of KANs. \end{abstract}
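The abstract describes parameterizing each learnable edge function in a KAN as a combination of Bernstein basis polynomials. The paper's actual implementation is not available here; the following is a minimal illustrative sketch of that idea, in which `bernstein_basis` and `BernsteinEdge` are hypothetical names, the degree `n` and coefficient initialization are assumptions, and inputs are taken to lie in $[0,1]$ (the natural domain of the Bernstein basis).

```python
import numpy as np
from math import comb


def bernstein_basis(x, n):
    """Evaluate the degree-n Bernstein basis polynomials
    B_{k,n}(x) = C(n, k) * x^k * (1 - x)^(n - k), k = 0..n,
    at points x in [0, 1]. Returns an array of shape (len(x), n + 1)."""
    x = np.asarray(x, dtype=float)
    k = np.arange(n + 1)
    coeffs = np.array([comb(n, j) for j in k], dtype=float)
    return coeffs * x[:, None] ** k * (1.0 - x[:, None]) ** (n - k)


class BernsteinEdge:
    """Hypothetical sketch of one KAN edge: a learnable univariate function
    expressed as a linear combination of Bernstein basis polynomials."""

    def __init__(self, n, rng=None):
        rng = rng or np.random.default_rng(0)
        self.n = n
        # Learnable coefficients; in training these would be fit by gradient descent.
        self.c = rng.normal(scale=0.1, size=n + 1)

    def __call__(self, x):
        # phi(x) = sum_k c_k * B_{k,n}(x)
        return bernstein_basis(x, self.n) @ self.c
```

A useful property motivating the robustness claim: the Bernstein basis is a partition of unity (the basis functions at any point sum to 1), so coefficient perturbations perturb the edge function by at most the same amount uniformly, consistent with the uniform convergence guarantee the abstract invokes.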
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1260