Fine-Tuning BERT For Monolingual Intent Classification

Anonymous

19 Mar 2023 · OpenReview Anonymous Preprint Blind Submission
Abstract: Intent classification is a crucial task in Natural Language Understanding, with numerous applications in chatbots, virtual assistants, and other conversational Artificial Intelligence (AI) systems. Recently, deep learning models, particularly pre-trained language models such as BERT, have achieved state-of-the-art results in various Natural Language Processing tasks, including intent classification. In this study, we explore the effectiveness of fine-tuning BERT using three different architectures for both single- and multi-target intent classification tasks: BertMLPLayer1, BertMLPLayer2, and BertGRU. We conduct experiments on the SILICONE datasets and achieve excellent results on single-target intent classification, with BertGRU outperforming the other two methods and previous benchmarks on the same datasets. However, our experiments on multi-target intent classification tasks did not yield satisfactory results.
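The abstract names three fine-tuning heads but gives no implementation details, so the sketch below is only an illustration of the general pattern in PyTorch with the HuggingFace transformers library. It assumes BertMLPLayer1/BertMLPLayer2 stack one or two linear layers on BERT's pooled [CLS] representation and that BertGRU runs a bidirectional GRU over the token representations; the hidden sizes, activation, pooling strategy, and bert-base-uncased checkpoint are all hypothetical choices, not the authors' configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertIntentClassifier(nn.Module):
    """Pre-trained BERT encoder with a swappable classification head.

    head="mlp1"/"mlp2": 1- or 2-layer MLP over the pooled [CLS] vector
    head="gru":         bidirectional GRU over the token representations
    (All sizes below are illustrative assumptions, not the paper's.)
    """

    def __init__(self, num_intents: int, head: str = "gru", hidden: int = 256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        d = self.bert.config.hidden_size  # 768 for bert-base
        self.head = head
        if head == "mlp1":
            self.classifier = nn.Linear(d, num_intents)
        elif head == "mlp2":
            self.classifier = nn.Sequential(
                nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, num_intents)
            )
        elif head == "gru":
            self.gru = nn.GRU(d, hidden, batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * hidden, num_intents)
        else:
            raise ValueError(f"unknown head: {head}")

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        if self.head == "gru":
            # Concatenate the final forward and backward GRU hidden states
            _, h_n = self.gru(out.last_hidden_state)
            pooled = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        else:
            pooled = out.pooler_output  # BERT's [CLS]-based pooled vector
        return self.classifier(pooled)  # intent logits

# Usage: tokenize an utterance and get intent logits
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["book a table for two"], return_tensors="pt", padding=True)
model = BertIntentClassifier(num_intents=10, head="gru")
logits = model(batch["input_ids"], batch["attention_mask"])
```

For single-target classification such a model would typically be trained end-to-end with a cross-entropy loss over the intent logits; the multi-target case would instead use one sigmoid output per intent, which is one plausible reading of the paper's multi-target setup.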