Keywords: Embodied AI, relational intelligence, feminist epistemologies, multimodal reasoning (language, vision, affect), agent-based modeling (ABM), intersectional fairness, bias auditing, social grounding, human-centered AI, participatory design, responsible AI
Abstract: Conventional AI systems often lack awareness of social context and perpetuate biases, partly because they assume knowledge to be objective and context-free. In contrast, feminist epistemologies emphasize that knowledge is situated and partial, shaped by the knower’s context. We argue that grounding AI in principles of situated knowledge, relationality, and care can produce more inclusive, socially aware intelligence, addressing systemic biases and blind spots in current models. We aim to formalize a cross-disciplinary framework for embodied, inclusive AI that bridges feminist theory and technical AI methods. Key objectives include encoding social context into AI decision-making and ensuring that the resulting agents respect diverse perspectives and intersectional identities.
Our framework leverages agent-based modeling to simulate social environments and emergent dynamics, allowing AI agents to learn from contextualized interactions. Each agent integrates multimodal reasoning across language, vision, and affect, enabling social and emotional grounding in its environment. Feminist principles guide the design: agents maintain situated awareness (recognizing that perceptions and knowledge are partial and context-dependent) and practice relational reasoning (modeling interdependence and social relationships with other agents and humans). We incorporate fairness auditing with an intersectional lens, evaluating model outcomes across intersecting demographics (e.g., gender, race, language) to uncover biases that single-axis evaluations miss; a sketch of such an audit follows. In particular, the framework emphasizes support for low-resource and multilingual communities, auditing performance disparities, and adapting models for underrepresented languages and cultures.
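As a minimal sketch of what such an intersectional audit could look like, the snippet below reports accuracy for every intersection of the chosen demographic attributes rather than one axis at a time. The function name audit_intersections and the toy data columns are illustrative assumptions, not the paper's actual toolkit API.

```python
# Minimal sketch of an intersectional fairness audit (illustrative, not the
# paper's actual toolkit): compute per-subgroup accuracy over every
# intersection of the listed attributes, so gaps hidden by single-axis
# evaluations become visible.
from itertools import product  # noqa: F401  (kept for extending to empty intersections)
import pandas as pd

def audit_intersections(df, attrs, y_true="y_true", y_pred="y_pred"):
    """Report accuracy and worst-group gap per intersectional subgroup."""
    rows = []
    for key, g in df.groupby(attrs, observed=True):
        key = key if isinstance(key, tuple) else (key,)
        acc = (g[y_true] == g[y_pred]).mean()
        rows.append({**dict(zip(attrs, key)), "n": len(g), "accuracy": acc})
    report = pd.DataFrame(rows).sort_values("accuracy")
    # Gap of each subgroup relative to the best-served subgroup.
    report["gap_vs_best"] = report["accuracy"].max() - report["accuracy"]
    return report

# Toy usage: audit gender x language jointly rather than each axis alone.
df = pd.DataFrame({
    "gender":   ["f", "f", "m", "m", "f", "m", "f", "m"],
    "language": ["sw", "en", "sw", "en", "sw", "sw", "en", "en"],
    "y_true":   [1, 0, 1, 1, 0, 1, 1, 0],
    "y_pred":   [0, 0, 1, 1, 1, 0, 1, 0],
})
print(audit_intersections(df, ["gender", "language"]))
```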
Initial simulations indicate that embedding agents in rich social contexts can surface and mitigate harmful dynamics (e.g., biased resource allocation patterns) prior to deployment. Multimodal social grounding has improved agents’ interpretability and responsiveness to human affect, reducing miscommunication in human–AI interactions. Intersectional fairness audits have revealed compounding errors for marginalized subgroups, motivating new mitigation strategies (such as adaptive reweighting and data augmentation) that substantially reduce measured disparities. Our contributions include: (1) a novel integration of feminist epistemology into AI system design; (2) a multi-agent, multimodal architecture for socially grounded AI; and (3) an intersectional evaluation toolkit for fairness in low-resource and multilingual settings.
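The snippet below is a minimal sketch of adaptive reweighting in the spirit described above: after each fit, examples from the intersectional subgroups with the highest error are upweighted so the next fit attends to them. The synthetic data, subgroup ids, and exponential update rule are illustrative assumptions, not the paper's exact mitigation procedure.

```python
# Sketch of adaptive reweighting (illustrative assumptions throughout):
# iteratively upweight high-error intersectional subgroups between fits.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 4))
groups = rng.integers(0, 3, size=n)             # hypothetical intersectional subgroup id
y = (X[:, 0] + 0.5 * groups + rng.normal(scale=0.5, size=n) > 1).astype(int)

weights = np.ones(n)
for step in range(5):
    clf = LogisticRegression().fit(X, y, sample_weight=weights)
    err = clf.predict(X) != y
    for g in np.unique(groups):
        group_err = err[groups == g].mean()     # subgroup error rate
        # Exponentially upweight harder subgroups (assumed update rule).
        weights[groups == g] *= np.exp(group_err)
    weights *= n / weights.sum()                # renormalize total weight

for g in np.unique(groups):
    print(f"group {g}: error {err[groups == g].mean():.3f}")
```

In this sketch the exponential update plays the role of a simple multiplicative-weights scheme; the paper's actual strategy may differ, and data augmentation for underrepresented subgroups could be combined with it.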
This work advances a vision of AI that is not only intelligent but also socially conscious and equitable.
Submission Number: 66