Abstract: Pioneering developments in large language models (LLMs) have marked a substantial stride in their ability to comprehend multifaceted debate topics and to construct argumentative narratives. Despite this progress, there remains a notable lack of scholarly understanding of how LLMs engage with and analyze computational arguments. Classical studies have delved into the linguistic frameworks of arguments, capturing their essence in terms of structural organization and logical coherence. Yet it remains unclear whether LLMs rely on these recognized frameworks when addressing argument-related tasks. To illuminate this research void, our study introduces three hypotheses centered on claim, evidence, and stance identification in argument mining tasks: 1) Omitting explicit logical connectors from an argument does not change its implicit logical relationship, and LLMs can still learn that relationship from the modified context. 2) The importance of words or phrases in an argument is determined by the amount of implicit information they encapsulate, regardless of their role within the argument's structure. 3) Removing crucial words or phrases from an argument alters its implicit logical relationship, making it impossible for LLMs to recover the original logic from the modified text. Through comprehensive assessments on the standard IAM dataset, we find that the information carried by an argument's phrases has a greater impact on LLMs' understanding of the argument than its structural markers, and the experimental results support our hypotheses.
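A minimal sketch of the two perturbation operations implied by hypotheses 1 and 3, deleting logical connectors versus deleting crucial content words, is given below. The connector inventory, the function names `remove_connectors` and `remove_keywords`, and the example sentence are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch (assumed, not taken from the paper): construct perturbed
# argument inputs by (a) dropping explicit logical connectors, or (b) dropping
# words judged crucial, before re-running a claim/evidence/stance model.

# Assumed connector list; the paper does not specify its exact inventory.
LOGICAL_CONNECTORS = {"because", "therefore", "however", "thus", "since", "although"}

def remove_connectors(argument: str) -> str:
    """Drop explicit logical connectors while keeping the rest of the text."""
    tokens = argument.split()
    kept = [t for t in tokens if t.lower().strip(".,;:") not in LOGICAL_CONNECTORS]
    return " ".join(kept)

def remove_keywords(argument: str, keywords: set) -> str:
    """Drop content words judged crucial (e.g. by an importance score)."""
    tokens = argument.split()
    kept = [t for t in tokens if t.lower().strip(".,;:") not in keywords]
    return " ".join(kept)

if __name__ == "__main__":
    arg = "Smoking should be banned because it harms public health."
    print(remove_connectors(arg))                      # connector "because" removed
    print(remove_keywords(arg, {"harms", "health"}))   # crucial words removed
```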
Paper Type: long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Contribution Types: Model analysis & interpretability
Languages Studied: English