FunLMs: Methods for Fine-tuning LLMs to Generate Humor

ACL ARR 2024 June Submission4956 Authors

16 Jun 2024 (modified: 08 Aug 2024), ACL ARR 2024 June Submission, CC BY 4.0
Abstract: This paper explores advancements in computational humor through fine-tuning large language models (LLMs) on curated datasets of English and Russian jokes. We conduct fine-tuning experiments on humorous data to assess whether generative humor is viable. We compare different LLMs, sampling techniques, and data curation strategies to improve the coherence and effectiveness of the generated humor, keeping the computational cost of the approach low. We test the proposed methods across formats and languages and conclude that generating humorous texts with LLMs is feasible, though it requires substantial effort in data preparation and evaluation.
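As a rough illustration of the pipeline the abstract describes (fine-tuning a causal LM on a joke corpus, then generating with tuned sampling settings), here is a minimal sketch using Hugging Face transformers. The model name (gpt2), data file (jokes_en.txt), hyperparameters, and prompt are illustrative assumptions, not the authors' exact setup.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # placeholder; the paper compares several LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# One joke per line in a plain-text file (hypothetical path).
raw = load_dataset("text", data_files={"train": "jokes_en.txt"})
train_ds = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# Standard causal-LM fine-tuning; hyperparameters are illustrative.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="funlm-ckpt", num_train_epochs=3,
                           per_device_train_batch_size=8, learning_rate=5e-5),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Sampling: temperature and nucleus (top-p) values are example settings only.
inputs = tokenizer("Why did the programmer quit his job?", return_tensors="pt")
out = model.generate(**inputs, do_sample=True, temperature=0.9, top_p=0.95,
                     max_new_tokens=60, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In practice, the choice of sampling parameters (temperature, top-p) and the quality of the curated joke data matter as much as the base model, which is consistent with the paper's emphasis on data preparation and evaluation.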
Paper Type: Short
Research Area: Generation
Research Area Keywords: computational humor, generative models, sampling
Contribution Types: NLP engineering experiment
Languages Studied: English, Russian
Submission Number: 4956