Can Language Models Make Fun? A Case Study in Chinese Comical Crosstalk

22 Sept 2022 (modified: 12 Mar 2024) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: humor generation, Chinese crosstalk, pre-trained language model, GPT, natural language generation
Abstract: Language is the principal tool for human communication, and humor is one of its most appealing aspects. Producing human-like natural language with computers, a.k.a. Natural Language Generation (NLG), is widely used in dialogue systems, chatbots, and machine translation, as well as in computer-aided creation, e.g., idea generation and scriptwriting. However, the humor aspect of natural language is relatively under-investigated, especially in the age of pre-trained language models. In this work, we aim to preliminarily test whether \textit{NLG can generate humor as humans do}. We build a new dataset consisting of numerous digitized \textbf{C}hinese \textbf{C}omical \textbf{C}rosstalk scripts (called \textbf{C}$^3$ for short), drawn from a popular Chinese performing art called `Xiangsheng' or `相声' that dates back to the 1800s\footnote{For the convenience of non-Chinese speakers, we use `crosstalk' for `Xiangsheng' throughout this paper.}. We benchmark various generation approaches, including training-from-scratch Seq2seq models, fine-tuned middle-scale PLMs, and large-scale PLMs (with and without fine-tuning). Moreover, we conduct a human assessment, showing that 1) \textit{large-scale pretraining largely improves crosstalk generation quality}; and 2) \textit{even the scripts generated by the best PLM are far from what we expect}. We conclude that humor generation can be largely improved with large-scale PLMs, but it is still in its infancy. The data and benchmarking code are publicly available at \url{https://github.com/anonNo2/crosstalk-generation}.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)
TL;DR: Testing whether AI could make fun!