Abstract: Large Language Models (LLMs) increasingly shape global discourse yet predominantly encode Western epistemological traditions. This position paper critically examines current approaches to cultural inclusivity in LLMs, arguing that they often rely on unidimensional metrics that inadequately capture cultural diversity. We advocate for Multiplexity---a framework recognizing multiple layers of existence, knowledge, and truth---as a theoretical foundation for developing more culturally inclusive language models. Our analysis demonstrates the limitations of traditional cultural alignment methods and highlights empirical evidence showing how Multiplexity-based interventions, particularly through Multi-Agent Systems, significantly improve cultural representation. By contrasting "Uniplexity" with Multiplexity, we address the epistemological limitations of current evaluation frameworks and propose moving beyond binary metrics toward multidimensional cultural evaluation. This paper contributes to ongoing efforts to mitigate cultural biases in AI systems, ultimately supporting more globally inclusive language technologies that respect diverse cultural perspectives.
Paper Type: Short
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: LLMs, Cultural Alignment, Inclusivity, Fairness, NLP
Contribution Types: Position papers
Languages Studied: English
Submission Number: 6922