Predicting Face Acts with Pre-trained Language Models

Anonymous

16 Dec 2022 (modified: 05 May 2023) | ACL ARR 2022 December Blind Submission
Abstract: Face is the public image an individual seeks to establish in human interaction, and face acts are speech acts that either positively or negatively affect face. Previous work employed conventional neural networks, but such a model must be trained to classify face acts in a specific domain, which results in a lack of generalizability. We attempt to classify face acts using GPT-3, a well-known pre-trained language model (PLM) that can solve various classification tasks with few-shot learning, for two reasons. First, we hypothesize that GPT-3 already knows what face acts are, and we hope to elicit that ability for the task through few-shot learning. Second, we assume that pre-training positively impacts face act classification, and we can observe this effect by comparing fine-tuned GPT-3 with the previous model. Experiments reveal that few-shot learning cannot elicit GPT-3's ability for this task. However, we confirm that fine-tuned GPT-3 outperforms the previous study and, even with a quarter of the original training data, maintains nearly the same performance as the previous study.
Paper Type: long
Research Area: Dialogue and Interactive Systems
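The abstract describes prompting GPT-3 with a handful of labeled examples so it predicts a face act label for a new utterance. Below is a minimal sketch of that few-shot setup, assuming the legacy OpenAI completions endpoint (openai-python < 1.0); the label set and example utterances are illustrative placeholders, not the paper's actual annotation scheme or prompts.

```python
# Hypothetical few-shot face act classification with GPT-3 (sketch only).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Illustrative two-way label set; the paper's scheme may differ.
FEW_SHOT_PROMPT = """\
Classify the face act of each utterance as face-saving or face-threatening.

Utterance: "Thank you so much for your generous support."
Face act: face-saving

Utterance: "You clearly have no idea what you are talking about."
Face act: face-threatening

Utterance: "{utterance}"
Face act:"""

def classify_face_act(utterance: str) -> str:
    """Return the model's predicted face act label for one utterance."""
    response = openai.Completion.create(
        model="text-davinci-003",   # a GPT-3-era completion model
        prompt=FEW_SHOT_PROMPT.format(utterance=utterance),
        max_tokens=5,
        temperature=0.0,            # deterministic label prediction
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(classify_face_act("I really appreciate you taking the time to help."))
```

The fine-tuning condition mentioned in the abstract would instead train GPT-3 on (utterance, label) pairs via the fine-tuning API rather than placing examples in the prompt.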