Participatory AI and the EU AI Act

Published: 14 Oct 2025, Last Modified: 08 Jan 2026 · Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society · CC BY-NC-SA 4.0
Abstract: Participatory AI calls for the involvement of stakeholders in AI design, development, evaluation, and deployment to attain more inclusive, transparent, and accountable AI. However, actual implementations of participatory AI remain scarcely incentivized by governments, despite appeals from both academia and industry. In this work, we investigate the role of 'participation' in the obligations that the EU AI Act places on providers and deployers of AI systems. First, we analyze the gaps between the participation explicitly mentioned in the non-binding recitals of the AI Act and the provisions of the Act itself, showing that the legal demand for participation is limited. For example, neither Article 9 on risk management systems nor Article 27 on the fundamental rights impact assessment mentions any form of participation; Article 95 on voluntary codes of conduct is the only enacting term that explicitly suggests stakeholder participation. Second, based on these results, we analyze opportunities for participation emerging from the obligations of providers and deployers of high-risk AI systems (AI Act, Chapter III, Sections 2 and 3). We identify five clusters of obligations with participatory opportunities: risk management, data and data governance, information provision, resilience testing, and impact assessment. Third, we provide example use cases for each of the identified opportunities for participation. This work contributes to a better understanding of the regulatory demands and practical opportunities for participatory AI in the context of the AI Act.