How LLMs Distort & Transform Our Language
Keywords: large language models, homogenization, writing
TL;DR: Using LLMs as writing assistants changes the semantics, lexical range, emotional distribution, and intent of human writing.
Abstract: Large language models (LLMs) are increasingly used as writing assistants for tasks such as revising text, generating suggestions, and improving clarity. However, it remains unclear whether these systems preserve users’ writing style, tone, or even intended meaning when editing text. In this work, we examine how LLMs alter the semantic and stylistic properties of human writing. Using a dataset of human-written essays and their revisions from before the release of LLMs, we show that asking an LLM to revise an essay based on the human-written feedback induces large changes in the resulting content and meaning. We then conduct a randomized controlled user study to understand how humans actually interact with LLMs when using them for writing. Our study of 50 users reveals that those who used an LLM for assistance showed similar alterations to their writing, and reported that the resulting essay was significantly less creative and not in their voice. Finally, we study how LLM use is already affecting our institutions, such as scientific peer review, where it alters the criteria for publication and the scores assigned. These findings highlight a misalignment between the perceived benefit of AI use and an insidious and consistent underlying semantic change, motivating future work on how widespread AI writing will affect our cultural and scientific institutions.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 178