Audience Evaluations of TV Characters in Reddit Communities
Keywords: Narrative Analysis, Social Norms, LLMs, Media Reception, Reddit
Abstract:
Across digital platforms, viewers debate fictional characters as if they were real moral agents, defending them, condemning them, and sometimes mobilizing entire communities around them. These discussions are not trivial fandom chatter but publicly observable expressions of collective judgment [Jenkins and Deuze, 2008, Zillmann and Cantor, 1977]. Because audiences often frame these evaluations in moral terms, they also reveal the norms and values that shape narrative interpretation. Unlike surveys, online discourse provides naturalistic, time-stamped records of how audiences evaluate fictional characters in real time.
Prior work shows that narratives encode culturally embedded norms and moral structures [Rai et al., 2025, Mitran et al., 2025], while other studies analyze audience reception and reader response in digital environments [Mire et al., 2025, Bouzoubaa and Rezapour, 2024, Rezapour and Diesner, 2017]. Computational approaches to character modeling, however, have largely focused on structural properties of narratives rather than on how audiences interpret and evaluate characters [Mian et al., 2026, Chang et al., 2025]. Recent work in computational narratology further highlights the importance of modeling the narrative's recipient: the audience interpreting it [Piper et al., 2021].
In this work, we study how audiences evaluate fictional characters in large-scale online discourse. We ask: What drives audience sentiment toward characters? Why do some morally flawed characters remain beloved while others provoke backlash? And what reasons do audiences cite when praising or condemning them? To address these questions, we model character evaluations as structured expressions of collective judgment in online discussions [Fish, 2004].
We curate a large-scale dataset of Reddit discussions from the largest communities dedicated to 24 popular serial television series in Reddit’s “Movies & TV” category. We restrict the selection to series that premiered after Reddit’s launch and retain only subreddits created before or shortly after each show’s premiere, so as to capture contemporaneous audience discussion. Using the Reddit Historical Posts dataset, we collect over 30M posts and 50M comments. To focus on active viewing periods, we retain only posts created during season airing windows. We then extract character mentions using gpt-5-mini. Evaluation on a human-validated subset shows strong performance (precision 0.95, recall 0.91, F1 0.93). We therefore apply the model to the full dataset and exclude posts without identifiable character mentions from downstream analysis.
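The mention-extraction evaluation reports precision, recall, and F1 against human-validated labels. As a quick reference, these follow directly from true-positive, false-positive, and false-negative counts; a minimal sketch (the counts shown are hypothetical, not the study's actual tallies):

```python
def prf1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall, and F1 from counts of correctly extracted
    mentions (tp), spurious extractions (fp), and missed mentions (fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for illustration only:
p, r, f = prf1(tp=90, fp=10, fn=10)
```

F1 is the harmonic mean of precision and recall, so it penalizes a model that trades one heavily for the other.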
To analyze how audiences evaluate characters, we annotate two components for each character-centered post: the sentiment expressed toward the character (positive, negative, or neutral) and the reasons underlying that sentiment. We develop a theory-informed taxonomy of twelve evaluative dimensions grounded in communication, social psychology, and media studies. The taxonomy was refined through literature review, collaboration with communication scholars, and multiple rounds of pilot annotation. We conduct a pilot annotation study across five television series with three annotators per post. Sentiment-level agreement reaches moderate levels (Krippendorff’s α between 0.6 and 0.7). A key challenge was class imbalance: a substantial proportion of posts are neutral, limiting reliable analysis of reason-level agreement. To address this, we employ active learning to sample posts more likely to contain positive or negative sentiment.
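Sentiment-level agreement is reported as Krippendorff's α, which compares observed pairwise disagreement within units to the disagreement expected by chance. A minimal sketch of the nominal-data version via a coincidence matrix (an illustration, not the annotation pipeline's actual implementation; label names are hypothetical):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units: list[list[str]]) -> float:
    """Krippendorff's alpha for nominal data.

    units: one inner list per post, holding the labels the annotators
    assigned to it (missing annotations simply omitted).
    """
    coincidences: Counter = Counter()
    for labels in units:
        m = len(labels)
        if m < 2:
            continue  # a unit rated once carries no pairing information
        for a, b in permutations(labels, 2):
            coincidences[(a, b)] += 1.0 / (m - 1)
    # Marginal frequency of each label in the coincidence matrix.
    n_c: Counter = Counter()
    for (a, _), w in coincidences.items():
        n_c[a] += w
    n = sum(n_c.values())
    observed = sum(w for (a, b), w in coincidences.items() if a != b) / n
    expected = sum(
        n_c[a] * n_c[b] for a in n_c for b in n_c if a != b
    ) / (n * (n - 1))
    return 1.0 - observed / expected

# Perfect agreement on two posts -> alpha = 1.0
alpha = krippendorff_alpha_nominal([["pos", "pos", "pos"], ["neg", "neg", "neg"]])
```

Unlike raw percent agreement, α corrects for chance and handles varying numbers of annotators per unit, which is why it suits multi-annotator pilot studies with occasional missing ratings.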
This project is ongoing. With expanded annotations, we will benchmark LLMs on sentiment and multi-label reason prediction against human judgments, enabling large-scale analysis across our Reddit corpus. At scale, this framework will allow us to examine temporal shifts in character evaluations, identify narrative events associated with changes in audience sentiment, and study how online communities interpret narrative developments and articulate moral expectations through discussions of fictional characters.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 137