FourierKAN outperforms MLP on Text Classification Head Fine-tuning

Published: 10 Oct 2024, Last Modified: 23 Nov 2024 · FITML 2024 Poster · CC BY 4.0
Keywords: Kolmogorov-Arnold Network, Fine-tuning, Linear Probing, Text Classification
TL;DR: Replacing MLPs with FourierKAN during linear probing improves text classification performance.
Abstract: In resource-constrained settings, adaptation to downstream classification tasks involves fine-tuning the final layer of a classifier (i.e., the classification head) while keeping the rest of the model weights frozen. Multi-Layer Perceptron (MLP) heads fine-tuned with pre-trained transformer backbones have long been the de facto standard for text classification head fine-tuning. However, the fixed non-linearity of MLPs often struggles to fully capture the nuances of contextual embeddings produced by pre-trained models, while also being computationally expensive. In our work, we investigate the efficacy of the Kolmogorov-Arnold Network (KAN) and its variant, Fourier KAN (FR-KAN), as alternative text classification heads. Our experiments reveal that FR-KAN significantly outperforms MLPs, with an average improvement of 10% in accuracy and 11% in F1-score across seven pre-trained transformer models and four text classification tasks. Beyond performance gains, FR-KAN is more computationally efficient and trains faster with fewer parameters. These results underscore the potential of FR-KAN to serve as a lightweight classification head, with broader implications for advancing other Natural Language Processing (NLP) tasks.
Submission Number: 22
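To make the setup concrete, the sketch below shows one common FourierKAN-style layer used as a classification head over frozen transformer embeddings, as described in the abstract. This is a minimal illustrative sketch, not the authors' implementation: the layer formulation (a learned Fourier series per input feature, summed per output unit), the class name `FourierKANLayer`, and the parameter values (hidden size 768, 4 classes, 5 frequencies) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FourierKANLayer(nn.Module):
    """Illustrative FourierKAN-style layer (assumed formulation, not the paper's code):
    each input feature passes through a learned truncated Fourier series, and the
    per-feature contributions are summed for every output unit."""
    def __init__(self, in_dim, out_dim, grid_size=5):
        super().__init__()
        self.grid_size = grid_size
        # Fourier coefficients: index 0 = cosine terms, index 1 = sine terms,
        # shape (2, out_dim, in_dim, grid_size).
        self.coeffs = nn.Parameter(
            torch.randn(2, out_dim, in_dim, grid_size) / (in_dim * grid_size) ** 0.5
        )
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x):
        # x: (batch, in_dim), e.g. frozen [CLS] embeddings from a transformer backbone.
        k = torch.arange(1, self.grid_size + 1, device=x.device, dtype=x.dtype)
        angles = x.unsqueeze(-1) * k                      # (batch, in_dim, grid_size)
        cos_t, sin_t = torch.cos(angles), torch.sin(angles)
        # Contract over input features and frequencies -> (batch, out_dim).
        y = torch.einsum("big,oig->bo", cos_t, self.coeffs[0]) \
          + torch.einsum("big,oig->bo", sin_t, self.coeffs[1])
        return y + self.bias

# Hypothetical usage: FR-KAN head on 768-d frozen embeddings, 4-class task.
head = FourierKANLayer(in_dim=768, out_dim=4, grid_size=5)
logits = head(torch.randn(8, 768))  # (8, 4)
```

In the linear-probing setting the paper studies, only a head like this would be trained, with all backbone weights kept frozen; the actual hyperparameters and layer definition used in the experiments are given in the paper itself.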