Abstract: Today, large language models (LLMs) are reshaping the norms of human communication, sometimes decoupling words from genuine human thought. This transformation runs deep, undermining the trust and interpretive norms historically tied to authorship. Drawing on the philosophy of language and AI ethics, we detail how large-scale text generation can induce semantic drift, erode accountability, and obscure intent and authorship. We introduce conceptual frameworks including hybrid authorship graphs (modeling humans, LLMs, and texts in a provenance network), epistemic doppelgängers (LLM-generated texts indistinguishable from human-authored ones), and authorship entropy (a measure of how diffusely a text's origin is spread across human and machine contributors). We explore mechanisms such as “proof-of-interaction” authorship verification and educational reforms to restore confidence in language. While the benefits of LLMs are undeniable (broader access, greater fluency, automation), the upheavals they introduce to the linguistic landscape demand a reckoning. This paper provides a conceptual lens through which to chart these changes.
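To make the abstract's two central constructs concrete, here is a minimal sketch in Python of one plausible reading: a hybrid authorship graph as a provenance network of human, LLM, and text nodes, with authorship entropy computed as Shannon entropy over a text's weighted contributors. All names, weights, and the entropy formulation are illustrative assumptions, not the paper's actual formalism.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str  # "human", "llm", or "text"

@dataclass
class AuthorshipGraph:
    # Provenance edges: (contributor, text, contribution weight).
    edges: list = field(default_factory=list)

    def add_contribution(self, contributor: Node, text: Node, weight: float):
        self.edges.append((contributor, text, weight))

    def authorship_entropy(self, text: Node) -> float:
        """Shannon entropy over the normalized contributor distribution.

        0.0 bits means a single unambiguous author; higher values mean
        authorship is spread across human and machine contributors.
        """
        weights = [w for contributor, t, w in self.edges if t is text]
        total = sum(weights)
        probs = [w / total for w in weights]
        return -sum(p * math.log2(p) for p in probs if p > 0)

# Usage: a text drafted mostly by an LLM, then edited by a human.
alice = Node("Alice", "human")
model = Node("LLM-X", "llm")
essay = Node("essay-1", "text")

graph = AuthorshipGraph()
graph.add_contribution(model, essay, 0.7)
graph.add_contribution(alice, essay, 0.3)
print(f"{graph.authorship_entropy(essay):.3f} bits")  # ≈ 0.881
```

Under this reading, a purely human-authored text has entropy 0, while a 70/30 LLM–human collaboration already carries about 0.88 bits of authorship uncertainty; an epistemic doppelgänger is precisely a text whose surface gives a reader no signal about which of these distributions produced it.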
Paper Type: Long
Research Area: Special Theme (conference specific)
Research Area Keywords: LLMs, semantic drift, AI ethics, proof-of-interaction, philosophy of language
Contribution Types: Position papers
Languages Studied: N/A, conceptual/multilingual focus
Submission Number: 3595