HKAN: Hierarchical Kolmogorov-Arnold Networks for Efficient and Interpretable Feature Interaction Modeling
Keywords: Feature Interaction Modeling, Kolmogorov-Arnold Networks, Interpretable Machine Learning, Tabular Data, Function Fitting
Abstract: Learning complex feature interactions is central to modern machine learning, underpinning strong performance in domains ranging from structured data analytics to predictive modeling in recommender systems.
However, despite notable progress, the field still faces three substantial challenges:
i) extensive reliance on manually predefined interaction structures that do not adapt automatically to specific datasets; ii) the 'black-box' nature of deep neural networks, which offers poor explainability of the learned interaction patterns; and iii) computational inefficiency caused by parameter-heavy architectures with limited scalability.
To address these challenges, we propose a unified framework, the Hierarchical Kolmogorov-Arnold Network (HKAN), for efficient and interpretable feature interaction modeling, built on three key components:
i) a factor-quality-guided evolutionary architecture search (FG-EAS) that automatically discovers data-centric optimal feature grouping strategies;
ii) a hierarchical sparse structure with superior parameter efficiency;
and iii) B-spline-based univariate function visualization and hierarchical factor structures that provide end-to-end interpretability from local to global levels.
To test the predictive and symbolic regression abilities of HKAN, we conduct experiments across 10 tabular learning and 2 function-fitting tasks. HKAN achieves state-of-the-art (SOTA) or highly competitive performance on the vast majority of datasets while using significantly fewer parameters. Notably, on three of these datasets, it reaches SOTA performance with less than 10\% of the parameters used by the baseline models. Moreover, unlike black-box baselines, HKAN can serve as a knowledge discovery tool with strong explainability (e.g., producing explicit formulas for data patterns), representing a significant step toward more trustworthy and accountable AI systems.
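To illustrate the kind of learnable univariate functions the abstract refers to, below is a minimal NumPy sketch of a B-spline-parameterized univariate function of the sort used on the edges of KAN-style layers. This is not the authors' implementation; the names `bspline_basis` and `UnivariateSpline`, the grid settings, and the coefficient initialization are illustrative assumptions.

```python
import numpy as np

def bspline_basis(x, grid, degree=3):
    """Evaluate B-spline basis functions at points x via the Cox-de Boor recursion."""
    # Pad the grid with `degree` repeated knots at each end so the bases cover [grid[0], grid[-1]].
    knots = np.concatenate([np.full(degree, grid[0]), grid, np.full(degree, grid[-1])])
    # Degree-0 bases: indicator of each knot interval.
    B = ((x[:, None] >= knots[None, :-1]) & (x[:, None] < knots[None, 1:])).astype(float)
    for d in range(1, degree + 1):
        left_num = x[:, None] - knots[None, :-(d + 1)]
        right_num = knots[None, d + 1:] - x[:, None]
        left_den = knots[d:-1] - knots[:-(d + 1)]
        right_den = knots[d + 1:] - knots[1:-d]
        # Guard against the zero-length intervals created by the repeated end knots.
        left = np.divide(left_num, left_den, out=np.zeros_like(left_num), where=left_den > 0)
        right = np.divide(right_num, right_den, out=np.zeros_like(right_num), where=right_den > 0)
        B = left * B[:, :-1] + right * B[:, 1:]
    return B  # shape: (len(x), len(grid) - 1 + degree)

class UnivariateSpline:
    """A learnable univariate function phi(x) = sum_k c_k B_k(x), one per KAN-style edge (illustrative)."""
    def __init__(self, grid_min=-1.0, grid_max=1.0, num_intervals=5, degree=3, rng=None):
        self.grid = np.linspace(grid_min, grid_max, num_intervals + 1)
        self.degree = degree
        rng = rng if rng is not None else np.random.default_rng(0)
        self.coef = rng.normal(scale=0.1, size=num_intervals + degree)  # trainable coefficients

    def __call__(self, x):
        # Clip inputs into the grid range; the half-open top interval needs a small margin.
        x = np.clip(x, self.grid[0], self.grid[-1] - 1e-9)
        return bspline_basis(x, self.grid, self.degree) @ self.coef

# Example: evaluate one edge function on a batch of feature values.
phi = UnivariateSpline()
print(phi(np.array([-0.5, 0.0, 0.7])))
```

Because each edge is a sum of a few spline coefficients over a fixed grid, the learned univariate curves can be plotted directly, which is the basis of the local-level interpretability the abstract describes.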
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 17182