Adaptive Method for Selecting Basis Functions in Kolmogorov-Arnold Networks for Magnetic Resonance Image Enhancement

Published: 01 Jan 2025 · Last Modified: 24 Jul 2025 · Program. Comput. Softw. 2025 · CC BY-SA 4.0
Abstract: We study a way to improve the quality of magnetic resonance image processing by using Kolmogorov–Arnold networks (KANs) for deep feature filtering in a convolutional neural network. The recently proposed KANs are inspired by the representation theorem of the same name from real analysis and approximation theory, which states that every multivariate continuous function on a compact set can be represented as a superposition of continuous single-variable functions. However, training the network by gradient descent imposes restrictions on the parameterization of these single-variable functions: they must be at least differentiable. For this reason, in practice they are sought in the linear span of B-splines or other differentiable basis functions. In this study, we propose an adaptive method in which the model itself selects the basis functions during training, thus avoiding a rule-of-thumb choice of basis. The method is based on the self-attention mechanism successfully used in state-of-the-art transformer networks. The proposed approach is tested on magnetic resonance image enhancement with images taken from the IXI dataset, where it demonstrates better average PSNR and total variation (TV) values on a synthetic test set. Without loss of generality, the system of basis functions included B-splines, Chebyshev polynomials, and Hermite functions.
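
The abstract describes the architecture only at a high level. As a rough illustration of how such a layer might be assembled, the PyTorch sketch below builds KAN-style edge functions as linear combinations over the three basis families named above and mixes the families with a small attention head. This is a minimal sketch under stated assumptions, not the authors' implementation: the class name AdaptiveBasisKANLayer, the knot grid, the polynomial degree, and the query/key construction are all hypothetical choices made for illustration.

# A minimal PyTorch sketch (an assumption, not the authors' code): a KAN-style
# layer whose learnable edge functions are linear combinations over three basis
# families, with the family mixture chosen per input by a small attention head.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def chebyshev_basis(x: torch.Tensor, degree: int) -> torch.Tensor:
    # T_0..T_degree evaluated on tanh(x), which maps inputs into [-1, 1]
    t = torch.tanh(x)
    polys = [torch.ones_like(t), t]
    for _ in range(2, degree + 1):
        polys.append(2 * t * polys[-1] - polys[-2])
    return torch.stack(polys[: degree + 1], dim=-1)        # (..., degree + 1)


def hermite_basis(x: torch.Tensor, degree: int) -> torch.Tensor:
    # Hermite functions: physicists' Hermite polynomials times a Gaussian envelope
    polys = [torch.ones_like(x), 2 * x]
    for n in range(2, degree + 1):
        polys.append(2 * x * polys[-1] - 2 * (n - 1) * polys[-2])
    envelope = torch.exp(-0.5 * x ** 2)
    norms = [1.0 / math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
             for n in range(degree + 1)]
    return torch.stack([c * p * envelope for c, p in zip(norms, polys)], dim=-1)


def bspline_basis(x: torch.Tensor, grid: torch.Tensor, k: int = 3) -> torch.Tensor:
    # Cox-de Boor recursion on a uniform knot vector; inputs far outside the
    # grid evaluate to zero, so features are assumed roughly normalized
    x = x.unsqueeze(-1)
    bases = ((x >= grid[:-1]) & (x < grid[1:])).to(x.dtype)
    for d in range(1, k + 1):
        left = (x - grid[: -(d + 1)]) / (grid[d:-1] - grid[: -(d + 1)]) * bases[..., :-1]
        right = (grid[d + 1:] - x) / (grid[d + 1:] - grid[1:-d]) * bases[..., 1:]
        bases = left + right
    return bases                                            # (..., len(grid) - k - 1)


class AdaptiveBasisKANLayer(nn.Module):
    # Hypothetical layer: one coefficient tensor per basis family; a query
    # computed from the input attends over learnable per-family keys, and the
    # softmax scores convexly mix the per-family edge outputs.
    def __init__(self, in_dim: int, out_dim: int, degree: int = 5, attn_dim: int = 16):
        super().__init__()
        self.degree = degree
        n_basis = degree + 1
        # knot vector sized so cubic B-splines also yield n_basis functions
        self.register_buffer("grid", torch.linspace(-1.5, 1.5, n_basis + 4))
        self.coef = nn.ParameterDict({
            name: nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))
            for name in ("bspline", "chebyshev", "hermite")
        })
        self.query = nn.Linear(in_dim, attn_dim)
        self.keys = nn.Parameter(torch.randn(3, attn_dim))  # one key per family

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (batch, in_dim)
        feats = {
            "bspline": bspline_basis(x, self.grid),
            "chebyshev": chebyshev_basis(x, self.degree),
            "hermite": hermite_basis(x, self.degree),
        }
        # per-family edge outputs summed over input edges: (batch, 3, out_dim)
        ys = torch.stack([torch.einsum("bik,oik->bo", feats[name], self.coef[name])
                          for name in self.coef], dim=1)
        scores = self.query(x) @ self.keys.T / math.sqrt(self.keys.shape[-1])
        attn = F.softmax(scores, dim=-1)                    # (batch, 3)
        return torch.einsum("bf,bfo->bo", attn, ys)


# Smoke test: a (batch, features) tensor in, a filtered tensor out.
layer = AdaptiveBasisKANLayer(in_dim=64, out_dim=64)
print(layer(torch.randn(8, 64)).shape)                      # torch.Size([8, 64])

Because the softmax produces a convex combination, such a layer can lean on different basis families in different regions of feature space, which is one plausible reading of "adaptive selection" of basis functions; the paper's actual mechanism may differ in detail.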