An Information-Theoretic Study of RLHF-Induced Uniformity in Large Language Model Outputs

ACL ARR 2025 July Submission1150 Authors

29 Jul 2025 (modified: 03 Sept 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Reinforcement Learning from Human Feedback (RLHF) is an increasingly popular post-training procedure for Large Language Models (LLMs), used to better align outputs with human values and improve output quality. As LLMs are adopted and refined for ever more modes of natural language communication, one might expect some sense of human-like audience design to be induced in them. However, the effects of RLHF on the considerations that shape LLM text production are difficult to quantify. We therefore propose an information-theoretic lens for investigating changes in the "naturalness" of language and the presence of audience design in LLMs trained with fine-tuning and RLHF methods. Building on the *Uniform Information Density (UID) Hypothesis*, which posits that humans optimize their language production to transfer information uniformly across a noisy channel, we analyze and compare how information is distributed within model-generated and human-generated text across various domains, probing the presence and form of audience design in LLMs. Using two primary metrics of information uniformity, surprisal variance and local consistency, we find that RLHF appears to encourage less variance in information rates across generations, while fine-tuning decreases uniformity, shifting distributions slightly toward human-generated text. Even so, models still exhibit significantly superhuman uniformity across text domains. Our results reveal that while modern LLM training and fine-tuning paradigms have made progress in approximating human-like information distributions, systematic differences persist.
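For concreteness, a sketch of how the two uniformity metrics are commonly operationalized in the UID literature; these are assumed standard forms, not necessarily the exact definitions the paper gives in Section 3.3. Writing $u_t = -\log p(w_t \mid w_{<t})$ for the surprisal of token $w_t$ in a sequence of length $N$, with mean surprisal $\bar{u}$:

\[
\mathrm{UID}_{\text{variance}} = \frac{1}{N}\sum_{t=1}^{N} \left(u_t - \bar{u}\right)^2,
\qquad
\mathrm{UID}_{\text{local}} = \frac{1}{N-1}\sum_{t=2}^{N} \left(u_t - u_{t-1}\right)^2.
\]

Lower values of either quantity indicate a more uniform distribution of information across the sequence: the first penalizes global deviation from the mean information rate, the second penalizes abrupt token-to-token changes.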
Paper Type: Long
Research Area: Syntax: Tagging, Chunking and Parsing
Research Area Keywords: cognitive modeling, language modeling, computational psycholinguistics, psycholinguistics, uniform information density, information theory
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Previous URL: https://openreview.net/forum?id=IvKRSI88f7
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: No, I want the same area chair from our previous submission (subject to their availability).
Reassignment Request Reviewers: Yes, I want a different set of reviewers
Justification For Not Keeping Action Editor Or Reviewers: We believe reviewer 4dfk's review lacked clarity: we did not understand their concerns about the theoretical limitations of surprisal as a measure of information content, and the reviewer did not follow up on our response asking for clarification. We also believe their dismissal of our work as "not novel" and "not surprising" violates the reviewer guidelines (common review issues H1 and H3) and was not supported by concrete reasons.
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: Limitations - Potential Risks
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: All datasets and models used are described and cited in Section 3 (datasets in Section 3.1, models in Section 3.2). Formulas taken from other works are cited in Section 3.3.
B2 Discuss The License For Artifacts: N/A
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: Section 3.1 describes our use of artifacts and potential misalignments with intended use.
B4 Data Contains Personally Identifying Info Or Offensive Content: No
B4 Elaboration: The identities of the human authors of our text data were irrelevant to our experiment, and the offensiveness of content was not relevant to our analysis. Further details on data anonymity and content can be found in the cited dataset papers.
B5 Documentation Of Artifacts: Yes
B5 Elaboration: Section 3.1
B6 Statistics For Data: Yes
B6 Elaboration: Section 3.1
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Section 3.2, Appendix C
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Sections 3.1 and 3.2, Appendices A and C
C3 Descriptive Statistics: Yes
C3 Elaboration: Sections 4.2, 5.2
C4 Parameters For Packages: Yes
C4 Elaboration: Appendices A and C
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: No
E1 Information About Use Of Ai Assistants: N/A
Author Submission Checklist: yes
Submission Number: 1150