Comparing BERT and a BCBAH Model for Dialogue Act Classification

20 Mar 2023 (modified: 20 Mar 2023) · OpenReview Archive Direct Upload
Abstract: Dialogue act classification is a key task in natural language processing: identifying the intended purpose or function of an utterance in a conversation. In recent years, deep learning models such as BERT have achieved state-of-the-art performance on this task. However, BERT's performance can still be improved by combining it with other deep learning components. In this report, we compare the performance of BERT and a BERT-CNN-BiGRU-Attention Hybrid (BCBAH) model on the "dyda_da" dataset from the SILICONE benchmark for dialogue act classification. The hybrid model combines the strengths of several deep learning architectures to improve the accuracy and efficiency of the task. We conducted experiments on this dataset to evaluate the performance of both models.
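The abstract names the components of the hybrid but not their arrangement or hyperparameters. The sketch below shows one plausible reading of a BERT-CNN-BiGRU-Attention stack in PyTorch; all layer sizes are illustrative assumptions, a random tensor stands in for BERT's token-level outputs, and the 4-class output matches the dialogue-act label set of "dyda_da" (derived from DailyDialog).

```python
# Hedged sketch of a BCBAH-style classification head. The paper does not
# specify hyperparameters; every size here is an illustrative assumption.
import torch
import torch.nn as nn

class BCBAHHead(nn.Module):
    def __init__(self, hidden=768, conv_channels=128, gru_hidden=64, n_classes=4):
        super().__init__()
        # CNN over BERT token embeddings extracts local n-gram features
        self.conv = nn.Conv1d(hidden, conv_channels, kernel_size=3, padding=1)
        # BiGRU models sequential dependencies over the CNN feature maps
        self.gru = nn.GRU(conv_channels, gru_hidden,
                          batch_first=True, bidirectional=True)
        # Additive attention pools the BiGRU states into a single vector
        self.attn = nn.Linear(2 * gru_hidden, 1)
        self.classifier = nn.Linear(2 * gru_hidden, n_classes)

    def forward(self, bert_out):                 # (batch, seq_len, hidden)
        # Conv1d expects (batch, channels, seq_len), hence the transposes
        x = torch.relu(self.conv(bert_out.transpose(1, 2))).transpose(1, 2)
        h, _ = self.gru(x)                       # (batch, seq_len, 2*gru_hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over tokens
        pooled = (w * h).sum(dim=1)              # attention-weighted sum
        return self.classifier(pooled)           # (batch, n_classes) logits

head = BCBAHHead()
fake_bert = torch.randn(2, 16, 768)  # stand-in for BERT's last hidden states
logits = head(fake_bert)             # shape (2, 4)
```

In practice the stand-in tensor would be replaced by the last hidden states of a (frozen or fine-tuned) BERT encoder, and the logits trained with cross-entropy against the four "dyda_da" dialogue-act labels.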