Intents Classification for Neural Text Generation

21 Mar 2023, OpenReview Archive Direct Upload
Abstract: The hype around OpenAI's ChatGPT has, more than ever, sparked interest in AI-based bots, for which labeling and classification of utterances are central to improving user experience. Broadly, Dialogue Act (DA) and Emotion/Sentiment (E/S) tasks are addressed with sequence labeling systems trained in a supervised manner. In this work, we propose four encoder-decoder models that learn generic representations adapted to spoken dialog, which we evaluate on six datasets of different sizes from the Sequence labellIng Evaluation benChmark fOr spoken laNguagE (SILICONE). The proposed models use either a hierarchical encoder or non-hierarchical encoders, both based on pre-trained transformers (BERT, XLNet). We observe that the models fail to learn some datasets due to their inherent properties, but overall the BERT-GRU architecture achieves the best accuracy.