Investigating Dialogue Act Classification through Cross-Corpora Fine-Tuning of Pretrained Language Models

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission
Abstract: Fine-tuning pre-trained language models (PLMs) has achieved significant performance improvements on natural language understanding tasks such as dialogue act classification. However, most of these models are evaluated and benchmarked on standard datasets and often do not perform well in practical, real-world scenarios such as our scenario of interest: dialogues of collaborative human learning, in which two learners work together to solve a problem in a classroom. To address this challenging scenario, we fine-tuned variants of the RoBERTa and LLaMA-2 models for dialogue act classification using cross-corpora fine-tuning approaches on two corpora of collaborative learning dialogues. Our experiments show that cross-corpora fine-tuning of PLMs has the potential to improve classification performance, especially when a corpus has limited representation of certain dialogue acts. This work highlights the promise of this approach for future domain-specific dialogue act classification tasks.
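The abstract does not spell out the training setup, but one plausible reading of a cross-corpora approach is to pool both corpora before fine-tuning, so that dialogue acts underrepresented in one corpus are backed by examples from the other. The sketch below illustrates that reading for the RoBERTa variant using Hugging Face Transformers; the file names (`corpus_a.csv`, `corpus_b.csv`), the label count, and all hyperparameters are hypothetical, not the paper's actual configuration.

```python
# Minimal sketch of one cross-corpora fine-tuning setup: pool two dialogue
# corpora and fine-tune RoBERTa for utterance-level dialogue act classification.
from datasets import load_dataset, concatenate_datasets
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

NUM_ACTS = 8  # hypothetical size of the dialogue act label set

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=NUM_ACTS
)

# Hypothetical CSV files with a "text" (utterance) column and a
# "label" (dialogue act id) column, one row per utterance.
corpus_a = load_dataset("csv", data_files="corpus_a.csv")["train"]
corpus_b = load_dataset("csv", data_files="corpus_b.csv")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

# Cross-corpora variant: train on the union of both corpora.
train_data = concatenate_datasets([corpus_a, corpus_b]).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="da-xcorpora", num_train_epochs=3),
    train_dataset=train_data,
    tokenizer=tokenizer,  # enables default dynamic padding via DataCollatorWithPadding
)
trainer.train()
```

Other cross-corpora variants (e.g., fine-tuning on one corpus and then continuing on the other, or training on one and evaluating on the other) would reuse the same pieces with a different split of the training data; the paper does not specify which variants it compared.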
Paper Type: short
Research Area: NLP Applications
Contribution Types: Data analysis
Languages Studied: N/A