RoBERTa vs BERT for intent classification

22 Mar 2023 · OpenReview Archive Direct Upload
Abstract: Intent classification is an essential task in natural language processing that aims to identify the intention or purpose behind a user's utterance. The task has become increasingly important in the development of conversational agents and chatbots, which need to understand user requests in order to provide relevant and accurate responses. In this paper, we examine which model, BERT or RoBERTa, is better suited to predicting dialogue acts (DA) or sentiment and emotion (S/E). We use the SILICONE benchmark, which fits the task: it contains corpora pairing each utterance with its associated DA or S/E label. We observe that RoBERTa outperforms BERT, especially for DA prediction; on the mrda corpus it even reaches an accuracy of 89%. For S/E prediction, RoBERTa also performs better than BERT, although its accuracy remains low.
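The setup described above casts intent classification as single-utterance sequence classification. A minimal sketch of that setup is below, using Hugging Face `transformers`; the label count (5, a common tag set size for mrda-style dialogue acts) is an assumption, and the model here is built from a fresh config so the example runs without downloading weights. In practice one would load pretrained `roberta-base` with `from_pretrained` and fine-tune it on the SILICONE corpora.

```python
# Hedged sketch: a RoBERTa sequence-classification head for dialogue-act
# labels. The model is randomly initialized from a config (no pretrained
# download); the paper's actual experiments would start from roberta-base
# and fine-tune on SILICONE utterances.
import torch
from transformers import RobertaConfig, RobertaForSequenceClassification

NUM_DA_LABELS = 5  # assumption: illustrative dialogue-act tag set size

config = RobertaConfig(
    vocab_size=50265,          # roberta-base vocabulary size
    num_labels=NUM_DA_LABELS,  # size of the classification head
)
model = RobertaForSequenceClassification(config)
model.eval()

# Dummy batch of two tokenized "utterances" (random token ids stand in
# for real tokenizer output).
input_ids = torch.randint(0, config.vocab_size, (2, 12))
with torch.no_grad():
    logits = model(input_ids=input_ids).logits

print(logits.shape)  # one score per DA label for each utterance
```

Accuracy on a corpus like mrda would then be the fraction of utterances whose argmax over these logits matches the gold DA tag.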