Linguistic Transformations in Argument Improvement: Analyzing Large Language Models’ Rewriting Strategies
Abstract: Text rewriting is a task that is related to, but distinct from, general text generation. While LLMs have been extensively studied on general text generation tasks, there is less research on text rewriting, and particularly on how models behave on this task. In this paper we analyze what LLMs change when rewriting texts. We focus specifically on argumentative texts and their improvement, a task named Argument Improvement (ArgImp). We present an evaluation pipeline consisting of metrics on four linguistic levels. This pipeline is used to score improved arguments from diverse corpora and to analyze the behavior of different LLMs on this task across these linguistic levels. By taking all four linguistic levels into consideration, we find that the models perform this task by reducing the vocabulary while simultaneously increasing average word length and merging sentences. Overall, we observe an increase along the persuasion and coherence dimensions. Our findings were made possible by splitting the analysis across the four linguistic levels in our evaluation pipeline.
Paper Type: Long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Research Area Keywords: style analysis, argument quality assessment, argument generation, evaluation
Languages Studied: English, German
Submission Number: 3586