Large Language Models are Better Logical Fallacy Reasoners with Counterargument, Goal, and Explanation-aware Prompt Formulation

ACL ARR 2024 April Submission 47 Authors

11 Apr 2024 (modified: 23 May 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract: The advancement of Large Language Models (LLMs) such as GPT-4 has significantly enhanced our capability to process complex language. However, accurately detecting and classifying logical fallacies, a crucial aspect of reasoning and argumentation, remains a challenging task. This study introduces a simple but powerful prompt formulation approach that can be leveraged both in zero-shot settings and with fine-tuned models. Our method formulates an input prompt by enriching the input text with counterarguments, explanations, and goals. The formulated prompts are used to elicit an answer in the zero-shot setting or are integrated into the training of existing Small Language Models (e.g., RoBERTa). Our experiments span diverse datasets featuring 5 to 13 types of logical fallacies, assessing the method's robustness and adaptability with GPT-3.5-turbo and GPT-4, with particular emphasis on the impact of various query types. The findings reveal significant improvements across the board: in zero-shot settings, the method increased the Macro F1-score by up to 0.20 on detection tasks, while on multiclass classification tasks with fine-tuned models, the Macro F1-score improved by up to 0.56.
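As a rough illustration of the zero-shot setting described in the abstract, the sketch below enriches an input text with counterargument, explanation, and goal queries before asking for a fallacy-detection verdict. It is a minimal sketch assuming the OpenAI Chat Completions API; the template wording and the `formulate_prompt` helper are hypothetical illustrations, not the authors' exact prompts.

```python
# Hypothetical sketch of counterargument/explanation/goal-aware prompt
# formulation for zero-shot fallacy detection; not the paper's verbatim template.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def formulate_prompt(text: str) -> str:
    """Enrich the input text with counterargument, explanation, and goal
    queries before requesting a detection judgment (illustrative wording)."""
    return (
        f"Text: {text}\n"
        "Counterargument: What would someone opposing this argument say?\n"
        "Explanation: Why might the reasoning in this text be flawed?\n"
        "Goal: What is the author trying to persuade the reader of?\n"
        "Question: Does this text contain a logical fallacy? Answer Yes or No."
    )


response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the paper also evaluates GPT-4
    messages=[{"role": "user", "content": formulate_prompt(
        "Everyone I know loves this product, so it must be the best."
    )}],
)
print(response.choices[0].message.content)
```

In the fine-tuned setting, the same enriched prompt text would instead be fed as input to a Small Language Model such as RoBERTa during training, rather than queried against an LLM.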
Paper Type: Short
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: Large Language Model, Query, zero-shot, prompting
Languages Studied: English
Section 2 Permission To Publish Peer Reviewers Content Agreement: Authors grant permission for ACL to publish peer reviewers' content
Submission Number: 47