Fine Irony: Training Transformers with Ordinal Likelihood Labels

ACL ARR 2024 June Submission 615 Authors

12 Jun 2024 (modified: 02 Aug 2024), ACL ARR 2024 June Submission, CC BY 4.0
Abstract: In this paper, we investigate a selection of methodologies for fine-tuning transformer-based classifiers for fine-grained irony (likelihood) detection in English tweets. These methodologies include approaching irony detection as an ordinal classification task and as a regression task with varying label granularity (3, 5, and 7 labels). Our experiments show that training irony detection models with fine-grained likelihood labels is not only possible but also advantageous, as these models reach higher F1-scores for binary classification than models trained specifically for the binary task. In addition, we explore how well the predictions of each model setup can be interpreted through Layer Integrated Gradients. The results show that, although performance for irony detection is consistent, the words selected as important by well-performing models do not consistently align with human trigger-word annotations.
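
Below is a minimal sketch, not taken from the paper's code, of what the regression-style setup and Layer Integrated Gradients attribution described in the abstract could look like. It assumes a Hugging Face encoder such as roberta-base and the Captum library; the model name, the 0.5 binary threshold, and the forward_score helper are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch: fine-grained irony *likelihood* as a regression target, with
# Layer Integrated Gradients attributions over the embedding layer.
# Assumes `transformers` and `captum`; names and hyperparameters are
# illustrative, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from captum.attr import LayerIntegratedGradients

model_name = "roberta-base"  # assumption: any transformer encoder works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=1 with problem_type="regression" trains an MSE head on a scalar score
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=1, problem_type="regression"
)
model.eval()

text = "Great, another Monday. Exactly what I needed."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    likelihood = model(**enc).logits.squeeze().item()
# Binary evaluation can reuse the fine-grained head by thresholding the score
# (0.5 is an assumed midpoint of a normalised 0-1 likelihood scale).
is_ironic = likelihood >= 0.5

# Attribution: which input tokens drive the predicted likelihood?
def forward_score(input_ids, attention_mask):
    # Return shape (batch,) so Captum needs no explicit target index
    return model(input_ids=input_ids, attention_mask=attention_mask).logits.squeeze(-1)

lig = LayerIntegratedGradients(forward_score, model.roberta.embeddings)
baseline_ids = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)
attributions, delta = lig.attribute(
    inputs=enc["input_ids"],
    baselines=baseline_ids,
    additional_forward_args=(enc["attention_mask"],),
    n_steps=50,
    return_convergence_delta=True,
)
token_scores = attributions.sum(dim=-1).squeeze(0)  # one score per input token
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
for tok, score in zip(tokens, token_scores.tolist()):
    print(f"{tok:>12s}  {score:+.4f}")
```

The per-token scores printed at the end are the kind of "important word" selections that the abstract compares against human trigger-word annotations; the paper's ordinal-classification variant would replace the single regression head with ordered class labels, which this sketch does not cover.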
Paper Type: Short
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: feature attribution, hardness of samples
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Reproduction study
Languages Studied: English
Submission Number: 615