MOCK: Can LLMs Really Understand Humor-Sarcasm?

ACL ARR 2024 June Submission 5323 Authors

16 Jun 2024 (modified: 24 Jul 2024) · ACL ARR 2024 June Submission · Readers: Everyone · License: CC BY 4.0
Abstract: Large Language Models (LLMs) have demonstrated the capacity to interact with humans using humor and sarcasm. However, whether they truly comprehend humor and sarcasm remains an open question. This work introduces the hu$\textbf{M}$or-sarcasm c$\textbf{O}$mprehension ben$\textbf{C}$hmar$\textbf{K}$, named MOCK, to systematically evaluate LLMs' abilities to detect, match, and explain humor-sarcasm across diverse scenes, including cartoons, posts, and comedy. Our comprehensive assessment reveals a significant gap between LLM and human performance on humor-sarcasm comprehension. To bridge this gap, we propose a Chain-of-Task approach that integrates the three comprehension sub-tasks (i.e., detecting, matching, and explaining), leveraging their interrelatedness to enhance humor-sarcasm comprehension. Additionally, we propose a novel humor-sarcasm generation task and explore the potential of MOCK to improve LLMs' humor-sarcasm generation capabilities. The evaluation results verify that humor-sarcasm comprehension can significantly enhance humor-sarcasm generation.
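(Illustrative sketch only, not the authors' implementation: one way a Chain-of-Task pipeline could leverage the interrelatedness of the three sub-tasks is to condition the matching and explanation prompts on earlier outputs. The `call_llm` callable and the prompt wording below are hypothetical placeholders.)

```python
from typing import Callable

def chain_of_task(call_llm: Callable[[str], str],
                  scene_description: str,
                  candidate_captions: list[str]) -> dict:
    """Hypothetical chain-of-task sketch: detect -> match -> explain,
    passing each sub-task's output into the next prompt."""
    # Sub-task 1: detect whether the scene contains humor or sarcasm.
    detection = call_llm(
        f"Scene: {scene_description}\n"
        "Does this scene contain humor or sarcasm? Answer yes or no, with a brief reason."
    )
    # Sub-task 2: match the scene to a caption, conditioned on the detection result.
    options = "\n".join(f"{i}. {c}" for i, c in enumerate(candidate_captions))
    matching = call_llm(
        f"Scene: {scene_description}\n"
        f"Detection result: {detection}\n"
        f"Candidate captions:\n{options}\n"
        "Which caption best matches the scene? Answer with its index."
    )
    # Sub-task 3: explain the humor/sarcasm, conditioned on both earlier outputs.
    explanation = call_llm(
        f"Scene: {scene_description}\n"
        f"Detection result: {detection}\n"
        f"Matched caption: {matching}\n"
        "Explain why this is humorous or sarcastic."
    )
    return {"detect": detection, "match": matching, "explain": explanation}
```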
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: image text matching; vision question answering; multimodality
Contribution Types: Data analysis
Languages Studied: English
Submission Number: 5323