Abstract: Speech act classification, which consists of determining the communicative intent of an utterance, has been widely investigated in recent years as a standalone task due to the rapid growth of NLP-based systems and AI chat assistants such as ChatGPT. It aims to classify an utterance with respect to the function it serves in a dialogue, i.e., the act the speaker is performing. In this paper, we focus on building a dialogue act classifier for the Meeting Recorder Dialogue Act (MRDA) corpus. We approach the problem as a sequence labeling task using a bidirectional LSTM (BiLSTM) recurrent neural network with different embedding models. We also add a context-aware self-attention mechanism and compare the results with baseline performance reported in the literature. We observe an increase in model accuracy when using BERT embeddings together with the context-aware self-attention mechanism.
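A minimal sketch of the kind of architecture the abstract describes (not the authors' code): BERT utterance embeddings fed to a BiLSTM over the dialogue, followed by self-attention across the utterance sequence and a per-utterance tag classifier. The model name, hidden sizes, number of heads, and number of tags below are illustrative assumptions.

# Sketch, assuming a PyTorch + Hugging Face Transformers setup.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class DialogueActTagger(nn.Module):
    def __init__(self, num_tags: int, bert_name: str = "bert-base-uncased",
                 lstm_hidden: int = 256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)        # utterance encoder
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * lstm_hidden, num_heads=4,
                                          batch_first=True)     # context-aware self-attention
        self.classifier = nn.Linear(2 * lstm_hidden, num_tags)

    def forward(self, input_ids, attention_mask):
        # input_ids: (num_utterances, max_tokens) -- one dialogue per forward pass
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        utt_vecs = out.last_hidden_state[:, 0, :]               # [CLS] vector per utterance
        seq = utt_vecs.unsqueeze(0)                             # (1, num_utterances, dim)
        lstm_out, _ = self.lstm(seq)                            # contextualize across the dialogue
        ctx, _ = self.attn(lstm_out, lstm_out, lstm_out)        # attend over the dialogue context
        return self.classifier(ctx).squeeze(0)                  # (num_utterances, num_tags)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    utterances = ["okay", "so what do we do next?", "let's look at the slides"]
    batch = tokenizer(utterances, padding=True, return_tensors="pt")
    model = DialogueActTagger(num_tags=5)                       # e.g., the 5 general MRDA tags
    logits = model(batch["input_ids"], batch["attention_mask"])
    print(logits.shape)                                         # torch.Size([3, 5])

Treating the whole dialogue as one sequence and tagging each utterance is what makes this a sequence labeling formulation rather than per-utterance classification in isolation.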