The Generative Trust Paradox: Evaluating the Impact of AI-Driven Modular Personalization on Audience Epistemic Confidence and Newsroom Economics
Keywords: Computational Journalism, Generative AI, News Personalization, Epistemic Trust, Recommender Systems, LLM Safety
TL;DR: Mod-AI locks facts while personalizing style, boosting engagement by 60% at only a 16% trust cost; unconstrained LLMs lose 53% of trust.
Abstract: The integration of Large Language Models (LLMs) into computational journalism introduces a critical tension we term the Generative Trust Paradox: hyper-personalization maximizes engagement while degrading epistemic confidence. We propose Mod-AI (Modular AI), a framework that decouples immutable factual reporting from mutable stylistic presentation through a cryptographically locked Factual Matrix, generating three constrained variants (analytical, concise, narrative).
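The fact-locking mechanism can be illustrated with a minimal sketch. Here, a "cryptographic lock" is modeled as a SHA-256 digest over a canonical serialization of the factual matrix, and a variant is accepted only if the matrix is unmodified and every fact value still appears verbatim in the rendered text. The function names (`lock_factual_matrix`, `verify_variant`) and the verbatim-substring check are illustrative assumptions, not the paper's actual API.

```python
import hashlib
import json

def lock_factual_matrix(facts: dict) -> str:
    # Canonical JSON (sorted keys, no whitespace) makes the digest
    # independent of key ordering, so any semantic change to the
    # facts changes the digest.
    canonical = json.dumps(facts, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_variant(facts: dict, locked_digest: str, variant_text: str) -> bool:
    # Reject if the factual matrix was tampered with after locking.
    if lock_factual_matrix(facts) != locked_digest:
        return False
    # Simplified fidelity check: every locked fact value must survive
    # verbatim in the stylistic variant.
    return all(str(value) in variant_text for value in facts.values())

facts = {"who": "the city council", "count": 7, "date": "2024-03-12"}
digest = lock_factual_matrix(facts)

# A narrative variant that preserves all locked facts passes.
narrative = "On 2024-03-12, the city council voted 7 to approve the measure."
# An over-aggressive concise variant that drops facts fails.
concise = "Council approves measure."

print(verify_variant(facts, digest, narrative))  # True
print(verify_variant(facts, digest, concise))    # False
```

In the paper's setting, each of the three constrained variants (analytical, concise, narrative) would be checked against the same locked matrix, so stylistic freedom never extends to the facts themselves.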
We evaluate Mod-AI on MIND-small (48,254 articles, 156,965 behavior logs) and MIND-large (95,411 articles, 2.2M behavior logs) using stratified samples with Wikidata-linked entity annotations and empirical CTR calibration. Mod-AI achieves a 60% engagement uplift while retaining 84% of baseline trust ($3.51$ vs. $4.15$), compared to unconstrained generation’s catastrophic 53% trust loss. These findings replicate on MIND-large with near-identical effect sizes (Cohen’s $d = 3.32$ and $d = 2.57$). All differences are statistically significant ($p < 10^{-300}$; bootstrap 95% CI: $[1.53, 1.57]$).
Submission Number: 5