Abstract: Dialogue act classification is a core natural language processing task that identifies the intended purpose or function of an utterance in a conversation. In recent years, deep learning models such as BERT have achieved state-of-the-art performance on this task, yet BERT's results can still be improved by combining it with complementary architectures. In this report, we compare the performance of BERT against a BERT-CNN-BiGRU-Attention Hybrid (BCBAH) model on the "dyda_da" subset of the SILICONE benchmark for dialogue act classification. The hybrid augments BERT's contextual encodings with convolutional feature extraction, bidirectional recurrent sequence modeling, and attention-based pooling to improve the accuracy and efficiency of the task. We conducted experiments on this dataset to evaluate both models.
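To make the hybrid concrete, the following is a minimal PyTorch sketch of one plausible BCBAH layout: BERT token encodings feed a CNN, then a BiGRU, then an attention pooling layer before classification. The layer sizes, kernel width, and the four-class label set (the DailyDialog act labels in dyda_da) are assumptions for illustration, not specifications taken from the paper.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BCBAH(nn.Module):
    """Hypothetical BERT-CNN-BiGRU-Attention hybrid for dialogue act classification."""

    def __init__(self, num_classes=4, hidden=128, kernel_size=3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        dim = self.bert.config.hidden_size  # 768 for bert-base
        # CNN over the token dimension extracts local n-gram features
        self.conv = nn.Conv1d(dim, hidden, kernel_size, padding=kernel_size // 2)
        # BiGRU models sequential dependencies over the CNN feature maps
        self.bigru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        # Additive attention pools BiGRU states into one utterance vector
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # (B, T, 768) contextual token encodings from BERT
        x = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        # Conv1d expects (B, C, T), so transpose around the convolution
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # (B, T, hidden)
        x, _ = self.bigru(x)                                          # (B, T, 2*hidden)
        # Mask padding positions before the attention softmax
        scores = self.attn(x).masked_fill(attention_mask.unsqueeze(-1) == 0, -1e9)
        weights = torch.softmax(scores, dim=1)                        # (B, T, 1)
        pooled = (weights * x).sum(dim=1)                             # (B, 2*hidden)
        return self.classifier(pooled)                                # (B, num_classes)
```

In this sketch the CNN, BiGRU, and attention components each contribute the strength named in the model's title: local feature extraction, sequence modeling, and learned pooling over token positions, respectively.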