Keywords: Bidirectional Alignment, AI, LLM, Diversity and Inclusion, Narrative Generation
TL;DR: Aligning LLMs for inclusive narratives requires bidirectional human-AI alignment—refining AI while fostering critical human engagement to recognize biases and epistemic gaps in historically silenced voices.
Abstract: Aligning Large Language Models (LLMs) for narrative generation demands more than model refinement. For narratives of marginalized communities, whose voices are historically silenced or distorted, a purely AI-centric alignment is insufficient. This tiny paper argues for bidirectional human-AI alignment, emphasizing critical human engagement alongside AI development. Through literary case studies—Virginia Woolf's Judith Shakespeare and Saidiya Hartman's Venus—we demonstrate that LLMs inherit and propagate historical biases, reflecting deep epistemic gaps. Addressing these requires human interpretation to recognize data limitations and embedded assumptions. True alignment for inclusive narratives necessitates both refined AI and informed human participation, fostering AI literacy and critical engagement with LLM outputs. This bidirectional approach is crucial for ensuring AI contributes meaningfully to representative storytelling, a key challenge for inclusive AI research.
Submission Type: Tiny Paper (2 Pages)
Archival Option: This is an archival submission
Presentation Venue Preference: CHI 2025
Submission Number: 27