What Is The Political Content in LLMs' Pre- and Post-Training Data?

Published: 03 Nov 2025 · Last Modified: 03 Dec 2025
EurIPS 2025 Workshop PAIG Poster · CC BY 4.0
Keywords: LLMs, training data, political biases, alignment
TL;DR: The paper presents an analysis of the political content in pre- and post-training data from LLMs.
Abstract: Large language models (LLMs) are known to generate politically biased text, yet how such biases arise remains unclear. A crucial step toward answering this question is the analysis of training data, whose political content remains largely underexplored in current LLM research. To address this gap, this paper presents an analysis of the pre- and post-training corpora of open-source models released together with their complete training datasets. From these corpora, we draw large random samples, automatically annotate documents for political orientation, and analyze their source domains and content. We then assess how political content in the training data correlates with models' stances on specific policy issues. Our analysis shows that left-leaning documents predominate across datasets, and that pre-training corpora contain significantly more politically engaged content than post-training data. We also find that left- and right-leaning documents frame similar topics through distinct arguments, and that the predominant stance in the training data strongly correlates with models' political biases when evaluated on policy issues. Finally, pre-training corpora subjected to different filtering and curation procedures exhibit broadly similar political content. These findings underscore the need to integrate political content analysis into future data curation pipelines, along with in-depth documentation of filtering strategies for transparency.
Submission Number: 16